Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204–210 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 204 Effective Adversarial Regularization for Neural Machine Translation Motoki Sato1, Jun Suzuki2,3, Shun Kiyono3,2 1Preferred Networks, Inc., 2Tohoku University, 3RIKEN Center for Advanced Intelligence Project [email protected], [email protected], [email protected] Abstract A regularization technique based on adversarial perturbation, which was initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to further leverage this promising methodology into more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investigates the effectiveness of several possible configurations of applying the adversarial perturbation and reveals that the adversarial regularization technique can significantly and consistently improve the performance of widely used NMT models, such as LSTMbased and Transformer-based models.1 1 Introduction The existence of (small) perturbations that induce a critical prediction error in machine learning models was first discovered and discussed in the field of image processing (Szegedy et al., 2014). Such perturbed inputs are often referred to as adversarial examples in the literature. Subsequently, Goodfellow et al. (2015) proposed a learning framework that simultaneously leverages adversarial examples as additional training data for reducing the prediction errors. This learning framework is referred to as adversarial training. In the field of natural language processing (NLP), the input is a sequence of discrete symbols, such as words or sentences. Since it is unreasonable to add a small perturbation to the symbols, applying the idea of adversarial training to NLP tasks has been recognized as a challenging problem. Recently, Miyato et al. (2017) overcame this problem 1Our code for replicating the experiments in this paper is available at the following URL: https://github.com/ pfnet-research/vat_nmt Encoder Decoder   !" #$" !% #$% !& #$& '( #$( ) '" #$" ) '* #$+ )  ," ,% ,+-" Figure 1: An intuitive sketch that explains how we add adversarial perturbations to a typical NMT model structure for adversarial regularization. The definitions of ei and fj can be found in Eq. 2. Moreover, those of ˆri and ˆr0 j are in Eq. 8 and 13, respectively. and reported excellent performance improvements on multiple benchmark datasets of text classification task. The key idea of their success is to apply adversarial perturbations into the input embedding layer instead of the inputs themselves as used in image processing tasks. An important implication of their study is that their method can be interpreted as a regularization method, and thus, they do not focus on generating adversarial examples. We refer to this regularization technique as adversarial regularization. 
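To make the contrast with the image-domain setting concrete, the following PyTorch-style fragment (our own illustration; the module and tensor names are not taken from any released implementation) shows where the perturbation enters: it is added to the continuous output of the embedding lookup rather than to the discrete token ids, which cannot be meaningfully perturbed.

```python
import torch
import torch.nn as nn

# A minimal sketch: assume a vocabulary of 16k subwords and 512-dimensional embeddings.
embed = nn.Embedding(16000, 512)

token_ids = torch.tensor([[11, 42, 7, 3]])   # discrete symbols: no notion of a "small" change
e = embed(token_ids)                         # continuous word embeddings
r = 0.01 * torch.randn_like(e)               # a perturbation defined in embedding space
e_perturbed = e + r                          # perturbed embeddings fed to the rest of the model
```

In adversarial regularization, r is not random noise as above but the worst-case direction under a norm constraint, as formalized in Section 4.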
We aim to further leverage this promising methodology into more sophisticated and critical neural models, i.e., neural machine translation (NMT) models, since NMT models recently play one of the central roles in the NLP research community; NMT models have been widely utilized for not only NMT but also many other NLP tasks, such as text summarization (Rush et al., 2015; Chopra et al., 2016), grammatical error correction (Ji et al., 2017), dialog generation (Shang et al., 2015), and parsing (Vinyals et al., 2015; Suzuki et al., 2018). Unfortunately, this application is not fully trivial since we potentially have several configurations for applying adversarial perturbations into NMT models (see details in Section 5). Figure 1 illustrates the model architecture of NMT models with adversarial perturbation. Therefore, the goal of this paper is to re205 veal the effectiveness of the adversarial regularization in NMT models and encourage researchers/developers to apply the adversarial regularization as a common technique for further improving the performance of their NMT models. We investigate the effectiveness of several possible configurations that can significantly and consistently improve the performance of typical baseline NMT models, such as LSTM-based and Transformer-based models, 2 Related Work Several studies have recently applied adversarial training to NLP tasks, e.g., (Jia and Liang, 2017; Belinkov and Bisk, 2018; Hosseini et al., 2017; Samanta and Mehta, 2017; Miyato et al., 2017; Sato et al., 2018). For example, Belinkov and Bisk (2018); Hosseini et al. (2017) proposed methods that generate input sentences with random character swaps. They utilized the generated (input) sentences as additional training data. However, the main focus of these methods is the incorporation of adversarial examples in the training phase, which is orthogonal to our attention, adversarial regularization, as described in Section 1. Clark et al. (2018) used virtual adversarial training (VAT), which is a semi-supervised extension of the adversarial regularization technique originally proposed in Miyato et al. (2016), in their experiments to compare the results with those of their proposed method. Therefore, the focus of the neural models differs from this paper. Namely, they focused on sequential labeling, whereas we discuss NMT models. In parallel to our work, Wang et al. (2019) also investigated the effectiveness of the adversarial regularization technique in neural language modeling and NMT. They also demonstrated the impacts of the adversarial regularization technique in NMT models. We investigate the effectiveness of the several practical configurations that have not been examined in their paper, such as the combinations with VAT and back-translation. 3 Neural Machine Translation Model Model Definition In general, an NMT model receives a sentence as input and returns a corresponding (translated) sentence as output. Let Vs and Vt represent the vocabularies of the input and output sentences, respectively. xi and yj denote the one-hot vectors of the i-th and j-th tokens in input and output sentences, respectively, i.e. xi 2 {0, 1}|Vs| and yj 2 {0, 1}|Vt|. Here, we introduce a short notation xi:j for representing a sequence of vectors (xi, . . . , xj). To explain the NMT model concisely, we assume that its input and output are both sequences of one-hot vectors x1:I and y1:J that correspond to input and output sentences whose lengths are I and J, respectively. 
Thus, the NMT model approximates the following conditional probability: p(Y |X) = YJ+1 j=1 p(yj|y0:j−1, X), (1) where y0 and yJ+1 represent one-hot vectors of special beginning-of-sentence (BOS) and end-ofsentence (EOS) tokens, respectively, and X = x1:I and Y = y1:J+1. Let E 2 RD⇥|Vs| and F 2 RD⇥|Vt| be the encoder and decoder embedding matrices, respectively, where D is the dimension of the embedding vectors. Thus, p(yj|y0:j−1, X) in Eq. 1 is calculated as follows: p(yj|y0:j−1, X) = AttDec " fj, h1:I # , h1:I = Enc(e1:I), fj = F yj−1, ei = Exi, (2) where Enc(·) and AttDec(·) represent functions that abstract the entire encoder and decoder (with an attention mechanism) procedures, respectively. Training Phase Let D be the training data consisting of a set of pairs of Xn and Yn, namely, D = {(Xn, Yn)}N n=1, where N represents the amount of training data. For training, we generally seek the optimal parameters ˆ⇥that can minimize the following optimization problem: ˆ⇥= argmin ⇥ $ J (D, ⇥) , (3) J (D, ⇥) = −1 |D| X (X,Y )2D `(X, Y , ⇥), (4) `(X, Y , ⇥) = log " p(Y |X, ⇥) # , (5) where ⇥represents a set of trainable parameters in the NMT model. Generation Phase We generally use a K-best beam search to generate an output sentence with the (approximated) K-highest probability given input sentence X in the generation (test) phase. We omit to explain this part in detail as our focus is a regularization technique that is independent of the generation phase. 206 4 Adversarial Regularization This section briefly describes the adversarial regularization technique applied to the text classification tasks proposed in Miyato et al. (2017). Let ˆri 2 RD be an adversarial perturbation vector for the i-th word in input X. The perturbed input embedding e0 i 2 RD is computed for each encoder time-step i as follows: e0 i = Exi + ˆri. (6) 4.1 Adversarial Training (AdvT) To obtain the worst case perturbations as an adversarial perturbation in terms of minimizing the log-likelihood of given X, we seek the optimal solution ˆr by maximizing the following equation: ˆr = argmax r,||r||✏ n `(X, r, Y , ⇥) o , (7) where ✏is a scalar hyper-parameter that controls the norm of the perturbation, and r represents a concatenated vector of ri for all i. Here, `(X, r, Y , ⇥) represents an extension of Eq. 5, where the perturbation ri in r is applied to the position of ˆri as described in Eq. 6. However, it is generally infeasible to exactly estimate ˆr in Eq. 7 for deep neural models. As a solution, an approximation method was proposed by Goodfellow et al. (2015), where `(X, Y , r, ⇥) is linearized around X. This approximation method induces the following non-iterative solution for calculating ˆri for all encoder time-step i: ˆri =✏ai ||a||2 , ai = rei`(X, Y , ⇥). (8) Thus, based on adversarial perturbation ˆr, the loss function can be defined as: A(D, ⇥) = −1 |D| X (X,Y )2D `(X, ˆr, Y , ⇥). (9) Finally, we jointly minimize the objective functions J (D, ⇥) and A(D, ⇥): ˆ⇥= argmin ⇥ n J (D, ⇥) + λA(D, ⇥) o , (10) where λ is a scalar hyper-parameter that controls the balance of the two loss functions. 4.2 Virtual Adversarial Training (VAT) Miyato et al. (2016) proposed virtual adversarial training, which is mainly used for the semisupervised extension of the adversarial regularization technique. The difference appears in the loss function ` in Eq. 7 and 9. 
Specifically, we can use perturbations calculated based on the virtual adversarial training by substituting ` with the following loss function: `KL(X, ˆr, ·, ⇥) = KL " p(· |X, ⇥)||p(· |X, ˆr, ⇥) # , (11) where KL(·||·) denotes the KL divergence. It is worth noting here that, in our experiments, we never applied the semi-supervised learning, but used the above equation for calculating perturbation as the replacement of standard adversarial regularization. This means that the training data is identical in both settings. 5 Adversarial Regularization in NMT As strictly following the original definition of the conventional adversarial training, the straightforward approach to applying the adversarial perturbation is to add the perturbation into the encoderside embeddings ei as described in Eq. 6. However, NMT models generally have another embedding layer in the decoder-side, as we explained in Eq. 2. This fact immediately offers us also to consider applying the adversarial perturbation into the decoder-side embeddings fj. For example, let ˆr0 j 2 RD be an adversarial perturbation vector for the j-th word in output Y . The perturbed embedding f 0 j 2 RD is computed for each decoder time-step j as follows: f 0 j = F yj−1 + ˆr0 j. (12) Then similar to Eq. 8, we can calculate ˆr0 as: ˆr0 j =✏bj ||b||2 , bj = rfj`(X, Y , ⇥), (13) where b is a concatenated vector of bj for all j. In addition, we need to slightly modify the definition of r, which is originally the concatenation vector of all ri for all i, to the concatenation vector of all ri and r0 j for all i and j. Finally, we have three options for applying the perturbation into typical NMT models, namely, applying the perturbation into embeddings in the (1) encoder-side only, (2) decoder-side only, and (3) both encoder and decoder sides. 207 DE$EN FR$EN training 189,318 208,323 test2012 (dev) 1,700 1,124 test2013 (test) 993 1,024 test2014 (test) 1,305 1,305 Table 1: Number of sentences in our datasets (Datasets are cleaned from the original dataset). Perturbation EN!DE Model position test2013 test2014 LSTM (None) 27.73 23.98 +AdvT enc-emb 28.73 24.90 dec-emb 27.44 23.71 enc-dec-emb 28.47 24.78 +VAT enc-emb 29.03 24.75 dec-emb 27.49 23.20 enc-dec-emb 29.47 24.92 Transformer (None) 29.15 25.19 +AdvT enc-emb 29.04 25.16 dec-emb 28.95 25.75 enc-dec-emb 29.61 25.78 +VAT enc-emb 29.95 26.00 dec-emb 29.62 25.88 enc-dec-emb 30.13 26.06 Table 2: BLEU scores averaged over five models in various configurations of perturbation positions (enc-emb, dec-emb, or enc-dec-emb) and adversarial regularization techniques (AdvT or VAT). 6 Experiments 6.1 Datasets We conducted experiments on the IWSLT evaluation campaign dataset (Cettolo et al., 2012). We used the IWSLT 2016 training set for training models, 2012 test set (test2012) as the development set, and 2013 and 2014 test sets (test2013 and test2014) as our test sets. Table 1 shows the statistics of datasets used in our experiments. For preprocessing of our experimental datasets, we used the Moses tokenizer2 and the truecaser3. We removed sentences over 50 words from the training set. We also applied the byte-pair encoding (BPE) based subword splitting script4 with 16,000 merge operations (Sennrich et al., 2016b). 
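As a concrete reference for the configurations compared in the following tables, the sketch below shows one way to compute the perturbations of Eq. 8 (encoder side) and Eq. 13 (decoder side) with automatic differentiation. It is a simplified PyTorch-style illustration under our own naming and interface assumptions, not the authors' released implementation; in particular, `model` is assumed to map perturbed encoder and decoder embeddings to output-token logits.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbations(model, enc_emb, dec_emb, targets, epsilon=1.0):
    """Approximate worst-case perturbations r_hat (Eq. 8) and r'_hat (Eq. 13).

    enc_emb, dec_emb: embedding tensors e_{1:I} and f_{1:J} with requires_grad=True.
    For VAT (Eq. 11), the cross-entropy below is replaced by the KL divergence between
    the model's output distributions with and without a small random perturbation.
    """
    logits = model(enc_emb, dec_emb)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    a, b = torch.autograd.grad(loss, (enc_emb, dec_emb))
    r_enc = epsilon * a / (a.norm(p=2) + 1e-12)   # normalize the concatenated gradient
    r_dec = epsilon * b / (b.norm(p=2) + 1e-12)
    return r_enc.detach(), r_dec.detach()
```

The perturbed embeddings e_i + r_hat_i and f_j + r'_hat_j are then passed through the model a second time to obtain the regularization loss of Eq. 9, which is added to the standard objective with weight λ (Eq. 10); the enc-emb, dec-emb, and enc-dec-emb configurations simply apply one or both of the two perturbations.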
6.2 Model Configurations We selected two widely used model architectures, namely, LSTM-based encoder-decoder 2https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ tokenizer/tokenizer.perl 3https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ recaser/truecase.perl 4https://github.com/rsennrich/ subword-nmt used in Luong et al. (2015) and self-attentionbased encoder-decoder, the so-called Transformer (Vaswani et al., 2017). We adapted the hyper-parameters based on the several recent previous papers5. Hereafter, we refer to the model trained with the adversarial regularization (` in Eq. 7) as AdvT, and similarly, with the virtual adversarial training (`KL in Eq. 11) as VAT. We set λ = 1 and ✏= 1 for all AdvT and VAT experiments. 6.3 Results Investigation of effective configuration Table 2 shows the experimental results with configurations of perturbation positions (enc-emb, decemb, or enc-dec-emb) and adversarial regularization techniques (AdvT or VAT). As evaluation metrics, we used BLEU scores (Papineni et al., 2002)6. Note that all reported BLEU scores are averaged over five models. Firstly, in terms of the effective perturbation position, enc-dec-emb configurations, which add perturbations to both encoder and decoder embeddings, consistently outperformed other configurations, which used either encoder or decoder only. Moreover, we achieved better performance when we added perturbation to the encoder-side (encemb) rather than the decoder-side (dec-emb). Furthermore, the results of VAT was consistently better than those of AdvT. This tendency was also observed in the results reported by Miyato et al. (2016). As discussed in Kurakin et al. (2017), AdvT generates the adversarial examples from correct examples, and thus, the models trained by AdvT tend to overfit to training data rather than those trained by VAT. They referred to this phenomenon of AdvT as label leaking. Results on four language pairs Table 3 shows the BLEU scores of averaged over five models on four different language pairs (directions), namely German!English, French!English, English!German, and English!French. Furthermore, the row (b) shows the results obtained when we incorporated pseudo-parallel corpora generated using the back-translation method (Sennrich et al., 2016a) as additional training data. For 5The detailed hyper-parameters are listed in Appendix A. 6We used the multi-bleu.perl script in the Moses toolkit: https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl 208 Perturbation Model position (a) LSTM (None) Transformer (None) +VAT enc-dec-emb +VAT +AdvT enc-dec-emb (b) w/ BT Transformer enc-dec-emb +VAT enc-dec-emb +VAT +AdvT enc-dec-emb DE!EN test2013 test2014 32.71 28.53 34.22 30.19 35.06 31.10 35.50 30.88 35.44 31.08 36.43 32.53 36.49 32.39 FR!EN test2013 test2014 39.09 36.25 38.87 37.20 40.09 37.89 40.26 38.44 40.44 38.42 41.29 39.76 41.56 39.64 EN!DE test2013 test2014 27.73 23.98 29.15 25.19 30.13 26.06 30.04 26.33 30.73 26.02 31.99 27.20 31.29 27.05 EN!FR test2013 test2014 38.89 36.18 40.43 37.90 41.13 38.64 41.67 38.72 41.74 39.03 43.41 40.15 42.61 39.95 Table 3: BLEU scores averaged over five models in four different language pairs (directions). (b) Results with using training data increased by back-translation method (BT). Input meine gebildete Mutter aber wurde Lehrerin . Reference but my educated mother became a teacher . Baseline (Transformer) my educated mother , though , became a teacher . 
Proposed (Transformer+VAT w/ BT) but my educated mother became a teacher . Input aber man kann sehen , wie die Menschen miteinander kommunizieren , zu welchen Zeiten sie einander anrufen , wann sie zu Bett gehen . Reference but you can see how your people are communicating with each other , what times they call each other , when they go to bed . Baseline (Transformer) but you can see how people talk to each other about what time they call each other when they go to bed . Proposed (Transformer+VAT w/ BT) but you can see how people communicate with each other , at which time they call each other , when they go to bed . Input wer im Saal hat ein Handy dabei ? Reference who in the room has a mobile phone with you ? Baseline (Transformer) who in the room has a cell phone in it ? Proposed (Transformer+VAT w/ BT) who in the room has a cell phone with me ? Table 4: Example translation from German!English (test2013). generating the pseudo-parallel corpora, we used the WMT14 news translation corpus. We observe that Transformer+VAT consistently outperformed the baseline Transformer results in both standard (a) and back-translation (b) settings. We report that VAT did not require us to perform additional heavy hyper-parameter search (excluding the hyper-parameter search in base models). Therefore, we can expect that VAT can improve the translation performance on other datasets and settings with relatively highconfidence. In addition, the rows +VAT+AdvT show the performance obtained by applying both AdvT and VAT simultaneously. We can further improve the performance in some cases, but the improvement is not consistent among the datasets. Actual Translation Examples Table 4 shows actual translation examples generated by the models compared in our German!English translation setting. We observe that Transformer+VAT with using training data increased by the backtranslation method seems to generate higher quality translations compared with those of the baseline Transformer. 7 Conclusion This paper discussed the practical usage and benefit of adversarial regularization based on adversarial perturbation in the current NMT models. Our experimental results demonstrated that applying VAT to both encoder and decoder embeddings consistently outperformed other configurations. Additionally, we confirmed that adversarial regularization techniques effectively worked even if we performed them with the training data increased by a back-translation method. We believe that adversarial regularization can be one of the common and fundamental technologies to further improve the translation quality, such as model ensemble, byte-pair encoding, and back-translation. Acknowledgments We thank three anonymous reviewers for their helpful comments. We also thank Takeru Miyato, who gave us valuable comments about AdvT/VAT. 209 References Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and Natural Noise Both Break Neural Machine Translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR). Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web Inventory of Transcribed and Translated Talks. In Proceedings of the 16th Annual Conference of the European Association for Machine Translation (EAMT), pages 261–268. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive Sentence Summarization with Attentive Recurrent Neural Networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 93–98. 
Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-Supervised Sequence Modeling with Cross-View Training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1914–1925. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving Google’s Perspective API Built for Detecting Toxic Comments. arXiv preprint arXiv:1702.08138. Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. 2017. A Nested Attention Neural Hybrid Model for Grammatical Error Correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 753–762. Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. 2017. Adversarial Machine Learning at Scale. In Proceedings of the 5th International Conference on Learning Representations (ICLR). Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2017. Adversarial Training Methods for SemiSupervised Text Classification. In Proceedings of the 5th International Conference on Learning Representations (ICLR). Takeru Miyato, Shin ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2016. Distributional Smoothing with Virtual Adversarial Training. In Proceedings of the 4th International Conference on Learning Representations (ICLR). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 379–389. Suranjana Samanta and Sameep Mehta. 2017. Towards Crafting Text Adversarial Samples. arXiv preprint arXiv:1707.02812. Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable Adversarial Perturbation in Input Embedding Space for Text. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), pages 4323–4330. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1715–1725. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural Responding Machine for Short-Text Conversation. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL & IJCNLP), pages 1577–1586. Jun Suzuki, Sho Takase, Hidetaka Kamigaito, Makoto Morishita, and Masaaki Nagata. 2018. An Empirical Study of Building a Strong Baseline for Constituency Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 612–618. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations (ICLR). 210 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), pages 6000–6010. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a Foreign Language. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), pages 2773–2781. Dilin Wang, Chengyue Gong, and Qiang Liu. 2019. Improving Neural Language Modeling via Adversarial Training. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 6555–6565.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2079–2089 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2079 Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models Dinghan Shen1⇤, Asli Celikyilmaz2, Yizhe Zhang2, Liqun Chen1, Xin Wang3, Jianfeng Gao2, Lawrence Carin1 1 Duke University 2 Microsoft Research, Redmond 3 University of California, Santa Barbara [email protected] Abstract Variational autoencoders (VAEs) have received much attention recently as an end-toend architecture for text generation with latent variables. However, previous works typically focus on synthesizing relatively short sentences (up to 20 words), and the posterior collapse issue has been widely identified in text-VAEs. In this paper, we propose to leverage several multi-level structures to learn a VAE model for generating long, and coherent text. In particular, a hierarchy of stochastic layers between the encoder and decoder networks is employed to abstract more informative and semantic-rich latent codes. Besides, we utilize a multi-level decoder structure to capture the coherent long-term structure inherent in long-form texts, by generating intermediate sentence representations as highlevel plan vectors. Extensive experimental results demonstrate that the proposed multi-level VAE model produces more coherent and less repetitive long text compared to baselines as well as can mitigate the posterior-collapse issue. 1 Introduction The variational autoencoder (VAE) for text (Bowman et al., 2016) is a generative model in which a stochastic latent variable provides additional information to modulate the sequential text-generation process. VAEs have been used for various text processing tasks (Semeniuta et al., 2017; Zhao et al., 2017; Kim et al., 2018; Du et al., 2018; Hashimoto et al., 2018; Shen et al., 2018a; Xu and Durrett, 2018; Wang et al., 2019). While most recent work has focused on generating relatively short sequences (e.g., a single sentence or multiple sentences up to around twenty words), generating long-form text (e.g., a single or multiple ⇤This research was carried out during an internship at Microsoft Research. flat-VAE (standard) multilevel-VAE (our model) i went here for a grooming and a dog . it was very good . the owner is very nice and friendly . the owner is really nice and friendly . i don t know what they are doing . i have been going to this nail salon for over a year now . the last time i went there . the stylist was nice . but the lady who did my nails . she was very rude and did not have the best nail color i once had . the staff is very friendly and helpful . the only reason i can t give them 5 stars . the only reason i am giving the ticket is because of the ticket . can t help but the staff is so friendly and helpful . can t help but the parking lot is just the same . i am a huge fan of this place . my husband and i were looking for a place to get some good music . this place was a little bit pricey . but i was very happy with the service . the staff was friendly . Table 1: Comparison of samples generated from two generative models on the Yelp reviews dataset. The standard model struggles with repetitions of the same context or words (in blue), yielding non-coherent text. A hierarhical decoder with multi-layered latent variables eliminates redundancy and yields more coherent text planned around focused concepts. 
paragraphs) with deep latent-variable models has been less explored. Recurrent Neural Networks (RNNs) (Bahdanau et al., 2015; Chopra et al., 2016) have mainly been used for most text VAE models (Bowman et al., 2016). However, it may be difficult to scale RNNs for long-form text generation, as they tend to generate text that is repetitive, ungrammatical, selfcontradictory, overly generic and often lacking coherent long-term structure (Holtzman et al., 2018). Two samples of text generated using standard VAE with an RNN decoder is shown in Table 1. In this work, we propose various multi-level network structures for the VAE model (ml-VAE), to address coherency and repetitiveness challenges associated with long-form text generation. To generate globally-coherent long text sequences, it is desirable that both the higher-level abstract features (e.g., topic, sentiment, etc.) and lowerlevel fine-granularity details (e.g., specific word choices) of long text can be leveraged by the generative network. It’s difficult for a standard 2080 RNN to capture such structure and learn to planahead. To improve the model’s plan-ahead capability for capturing long-term dependency, following (Roberts et al., 2018), our first multi-level structure defines a hierarchical RNN decoder as the generative network that learns sentence- and word-level representations. Rather than using the latent code to initialize the RNN decoder directly, we found it more effective when first passing the latent code to a higher-level (sentence) RNN decoder, that outputs an embedding for the lowerlevel (word) RNN decoder that generates words. Since the low-level decoder network cannot fall back on autoregression, it gains a stronger reliance on the latent code to reconstruct the sequences. Prior work has found that VAE training often suffers from posterior collapse, in which the model ignores the latent code (Bowman et al., 2016). This issue is related to the fact that the decoder network is usually parametrized with an autoregressive neural network, such as RNNs with teacher forcing scheme (Bowman et al., 2016; Yang et al., 2017; Goyal et al., 2017; Semeniuta et al., 2017; Shen et al., 2018b). Several strategies have been proposed (see optimization challenges in Section 4) to make the decoder less autoregressive, so less contextual information is utilized by the decoder network (Yang et al., 2017; Shen et al., 2018b). We argue that learning more informative latent codes can enhance the generative model without the need to lessen the contextual information. We propose leveraging a hierarchy of latent variables between the convolutional inference (encoder) networks and a multi-level recurrent generative network (decoder). With multiple stochastic layers, the prior of bottom-level latent variable is inferred from the data, rather than fixed as a standard Gaussian distribution as in typical VAEs (Kingma and Welling, 2013). The induced latent code distribution at the bottom level can be perceived as a Gaussian mixture, and thus is endowed with more flexibility to abstract meaningful features from the input text. While recent work has explored structures for more informative latent codes (Kim et al., 2018; Gu et al., 2018), ml-VAE is conceptually simple and easy to implement. We evaluate ml-VAE on language modeling, unconditional and conditional text generation tasks. 
We show substantial improvements against several baseline methods in terms of perplexity on language modeling and quality of generated samples based on BLEU statistics and human evaluation. 2 Variational Autoencoder (VAE) Let x denote a text sequence, which consists of L tokens, i.e., x1, x2, ..., xL. A VAE encodes the text x using a recognition (encoder) model, qφ(z|x), parameterizing an approximate posterior distribution over a continuous latent variable z (whose prior is typically chosen as standard diagonalcovariance Gaussian). z is sampled stochastically from the posterior distribution, and text sequences x are generated conditioned on z, via a generative (decoder) network, denoted as p✓(x|z). A variational lower bound is typically used to estimate the parameters (Kingma and Welling, 2013): Lvae = Eqφ(z|x)  log p✓(x|z)p(z) qφ(z|x) " , (1) = Eqφ(z|x)[log p✓(x|z)] −DKL(qφ(z|x)||p(z)), This lower bound is composed of a reconstruction loss (first term) that encourages the inference network to encode information necessary to generate the data and a KL regularizer (second term) to push qφ(z|x) towards the prior p(z). Although VAEs have been shown to be effective in a wide variety of text processing tasks (see related work), there are two challenges associated with generating longer sequences with VAEs: (i) they lack a long-term planning mechanism, which is critical for generating semantically-coherent long texts (Serdyuk et al., 2017); and (ii) posterior collapse issue. Concerning (ii), it was demonstrated in (Bowman et al., 2016) that due to the autoregressive nature of the RNN, the decoder tends to ignore the information from z entirely, resulting in an extremely small KL term (see Section 4). 3 Multi-Level Generative Networks 3.1 Single Latent Variable (ml-VAE-S:) Our first multi-level model improves upon standard VAE models by introducing a plan-ahead ability to sequence generation. Instead of directly making word-level predictions only conditioned on the semantic information from z, a series of plan vectors are first generated based upon z with a sentence-level LSTM decoder (Li et al., 2015b). Our hypothesis is that an explicit design of (inherently hierarchical) paragraph structure can capture sentence-level coherence and potentially mitigate repetitiveness. Intuitively, when predicting each token, the decoder can use information from 2081 … Word-Level LSTM Decoder Sentence Level LSTM Decoder 𝒛𝟏 𝒛𝟏 𝒛𝟐 𝒛𝟐 KL Losses 𝜇ଶ 𝜎ଶ 𝜇ଵ 𝜎ଵ Lower Level CNN I love this place. Lots of veggie options. Higher Level CNN 𝒛𝟏 𝒛𝟏 𝒛𝟏 Generative (Decoder) Network Inference (Encoder) Network … … 𝒉௧ିଵ ௦ 𝒉௧ ௦ Try veggie quesadilla. I love this place. Lots of veggie options. Try veggie quesadilla. Figure 1: The proposed multi-level VAE with double latent variables (ml-VAE-D). both the words generated previously and from sentence-level representations. An input paragraph consist of M sentences, and each sentence t has Nt words, t=1,. . . , M. To generate the plan vectors, the model first samples a latent code z through a one-layer multi-layered perceptron (MLP), with ReLU activation functions, to obtain the starting state of the sentence-level LSTM decoder. Subsequent sentence representations, namely the plan vectors, are generated with the sentence-level LSTM in a sequential manner: hs t = LSTMsent(hs t−1, z), (2) The latent code z can be considered as a paragraph-level abstraction, relating to information about the semantics of each generated subsequence. 
Therefore we input z at each time step of the sentence-level LSTM, to predict the sentence representation. Our single-latent-variable model sketched in Figure 3 of supplementary material. The generated sentence-level plan vectors are then passed onto the word-level LSTM decoder to generate the words for each sentence. To generate each word of a sentence t, the corresponding plan vector, hs t, is concatenated with the word embedding of the previous word and fed to LSTMword at every time step 1. Let wt,i denote the i-th token of the t-th sentence This process can be expressed as (for t = 1, 2, ..., M and i = 1, 2, 3, ..., Nt): hw t,i = LSTMword(hw t,i−1, hs t, We[wt,i−1]), (3) p(wt,i|wt,<i, hs t) = softmax(V hw t,i), (4) The initial state hw t,0 of LSTMword is inferred from the corresponding plan vector via an MLP layer. V represents the weight matrix for computing distribution over words, and We are word embeddings to be learned. For each sentence, once the special END token is generated, the word-level 1We use teacher-forcing during training and greedy decoding at test time. LSTM stops decoding 2. LSTMword decoder parameters are shared for each generated sentence. 3.2 Double Latent Variables (ml-VAE-D): Similar architectures of our single latent variable ml-VAE-S model have been applied recently for multi-turn dialog response generation (Serban et al., 2017; Park et al., 2018), mainly focusing on short (one-sentence) response generation. Different from these works, our goal is to generate long text which introduces additional challenges to the hierarchical generative network. We hypothesize that with the two-level LSTM decoder embedded into the VAE framework, the load of capturing global and local semantics are handled differently than the flat-VAEs (Chen et al., 2016). While the multi-level LSTM decoder can capture relatively detailed information (e.g., word-level (local) coherence) via the word- and sentence-level LSTM networks, the latent codes of the VAE are encouraged to abstract more global and high-level semantic features of multiple sentences of long text. Our double latent variable extension, ml-VAED, is shown in Figure 1. The inference network encodes upward through each latent variable to infer their posterior distributions, while the generative network samples downward to obtain the distributions over the latent variables. The distribution of the latent variable at the bottom is inferred from the top-layer latent codes, rather than fixed (as in a standard VAE model). This also introduces flexibility to the model to abstract useful highlevel features (Gulrajani et al., 2016), which can then be leveraged by the multi-level LSTM network. Without loss of generality, here we choose to employ a two-layer hierarchy of latent variables, where the bottom and top layers are denoted as z1 and z2, respectively, which can be easily extended to multiple latent-variable layers. Another important advantage of multi-layer la2Each sentence is padded with an END token. 2082 tent variables in the VAE framework is related to the posterior collapse issue. Even though the single latent variable model (ml-VAE-S) defines a multi-level LSTM decoder, the posterior collapse can still exist since the LSTM decoder can still ignore the latent codes due to its autoregressive property. With the hierarchical latent variables, we propose a novel strategy to mitigate this problem, by making less restrictive assumptions regarding the prior distribution of the latent variable. 
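To keep the two-level decoding procedure of Eqs. 2-4 concrete, the following PyTorch-style sketch summarizes it (a simplified illustration with our own module names, a single latent sample, and teacher forcing assumed via pre-shifted inputs; it is not the authors' implementation).

```python
import torch
import torch.nn as nn

class MultiLevelDecoder(nn.Module):
    """Sentence-level LSTM emits plan vectors h^s_t (Eq. 2); a word-level LSTM
    conditioned on the current plan vector predicts each word (Eqs. 3-4)."""

    def __init__(self, vocab_size, emb_dim, z_dim, hid_dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.sent_lstm = nn.LSTMCell(z_dim, hid_dim)           # h^s_t = LSTM_sent(h^s_{t-1}, z)
        self.word_lstm = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
        self.init_word = nn.Linear(hid_dim, hid_dim)           # MLP: plan vector -> h^w_{t,0}
        self.out = nn.Linear(hid_dim, vocab_size)              # V in Eq. 4

    def forward(self, z, sentences):
        """z: latent code [B, z_dim]; sentences: list of M tensors [B, N_t] holding the
        previous-word ids w_{t,i-1} (already shifted, starting from the BOS token)."""
        hs = z.new_zeros(z.size(0), self.out.in_features)
        cs = torch.zeros_like(hs)
        logits = []
        for words in sentences:                        # one plan vector per sentence
            hs, cs = self.sent_lstm(z, (hs, cs))       # z is fed at every sentence step
            hw = torch.tanh(self.init_word(hs))
            cw = torch.zeros_like(hw)
            step_logits = []
            for i in range(words.size(1)):             # condition on w_{t,i-1} and h^s_t
                inp = torch.cat([self.word_emb(words[:, i]), hs], dim=-1)
                hw, cw = self.word_lstm(inp, (hw, cw))
                step_logits.append(self.out(hw))       # softmax(V h^w_{t,i}) via cross-entropy
            logits.append(torch.stack(step_logits, dim=1))
        return logits                                   # per-sentence logits for the reconstruction loss
```

With the decoder fixed, the remaining design choice is how the latent variables themselves are specified, which is where the single- and double-variable models differ.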
Our model yields a larger KL loss term relative to flatVAEs, indicating more informative latent codes. The posterior distributions over the latent variables are assumed to be conditionally independent given the input x. We can represent the joint posterior distribution of the two latent variables as 3: qφ(z1, z2|x) = qφ(z2|x)qφ(z1|x) (5) Concerning the generative network, the latent variable at the bottom is sampled conditioned on the one at the top. Thus, we have: p✓(z1, z2) = p✓(z2)p✓(z1|z2) (6) DKL(qφ(z|x)||p(z)), the second term of the VAE objective, then becomes the KL divergence between joint posterior and prior distributions of the two latent variables. Under the assumptions of (5) and (6), the variational lower bound yields: Lvae = Eq(z1|x)[log p(x|z1)] −DKL(q(z1, z2|x)||p(z1, z2)) (7) Abbreviarting p✓and qφ with p and q, we get: DKL(q(z1, z2|x)||p(z1, z2)) = Z q(z2|x)q(z1|x) log q(z2|x)q(z1|x) p(z2)p(z1|z2) dz1dz2 = Z z1,z2 [qφ(z2|x)qφ(z1|x) log qφ(z1|x) p✓(z1|z2) + q(z2|x)q(z1|x) log q(z2|x) p(z2) ]dz1dz2 = Eq(z2|x)[DKL(q(z1|x)||p(z1|z2))] + DKL(q(z2|x)||p(z2)) (8) The left-hand side of (8) is the abbreviation of DKL(qφ(z1, z2|x)||p(z1, z2)). Given the Gaussian assumption for both the prior and posterior 3We assume z1 and z2 to be independent on the encoder side, since this specification will yield a closed-form expression for the KL loss between p✓(z1, z2) and qφ(z1, z2|x). distributions, both KL divergence terms can be written in closed-form. To abstract meaningful representations from the input paragraphs, we choose a hierarchical CNN architecture for the inference/encoder networks for both single and double latent variable models. We use sentence-level CNNs to encode each sentence into a fixed-length vector, which are then aggregated and send to paragraph-level CNN encoder. The inference networks parameterizing q(z1|x) and q(z2|x) share the same parameters as the lower-level CNN. The single-variable ml-VAE-S model feeds the paragraph feature vector into the linear layers to infer the mean and variance of the latent variable z. In the double-variable model ml-VAE-D, the feature vector is transformed with two MLP layers, and then is used to compute the mean and variance of the top-level latent variable. 4 Related Work VAE for text generation. VAEs trained under the neural variational inference (NVI) framework, has been widely used for generating text sequences: (Bowman et al., 2016; Yang et al., 2017; Semeniuta et al., 2017; Miao et al., 2016; Serban et al., 2017; Miao et al., 2017; Zhao et al., 2017; Shen et al., 2017; Guu et al., 2018; Kim et al., 2018; Yin et al., 2018; Kaiser et al., 2018; Bahuleyan et al., 2018; Chen et al., 2018b; Deng et al., 2018; Shah and Barber, 2018). By encouraging the latent feature space to match a prior distribution within an encoderdecoder architecture, the learned latent variable could potentially encode high-level semantic features and serve as a global representation during the decoding process (Bowman et al., 2016). The generated results are also endowed with better diversity due to the sampling procedure of the latent codes (Zhao et al., 2017). Generative Adversarial Networks (GANs) (Yu et al., 2017; Hu et al., 2017; Zhang et al., 2017; Fedus et al., 2018; Chen et al., 2018a), is another type of generative models that are commonly used for text generation. However, existing works have mostly focused on generating one sentence (or multiple sentences with at most twenty words in total). 
The task of generating relatively longer units of text has been less explored. Optimization Challenges. The “posterior collapse” issue associated with training text-VAEs was first outlined by (Bowman et al., 2016). They 2083 used two strategies, KL divergence annealing and word dropout, however, none of them help to improve the perplexity compared to a plain neural language model. (Yang et al., 2017) argue that the small KL term relates to the strong autoregressive nature of an LSTM generative network, and they proposed to utilize a dilated CNN as a decoder to improve the informativeness of the latent variable. (Zhao et al., 2018b) proposed to augment the VAE training objective with an additional mutual information term. This yields an intractable integral in the case where the latent variables are continuous. Recent work (He et al., 2019; Fu et al., 2019) has shown that advanced scheduling can mitigate the posterior collapse issue. We instead introduce more flexible priors and hierarchical encoder and decoder structures to deal with posterior collapse. Hierarchical Structures. Natural language is inherently hierarchical (characters form a word, words form a sentence, sentences form a paragraph, paragraphs from a document, etc.). Previous work used multi-level LSTM encoders (Yang et al., 2016) or hierarchical autoencoders (Li et al., 2015a) to learn hierarchical representations for long text or defined a stochastic latent variable for each sentence at decoding time (Serban et al., 2017). In contrast, our model encodes the entire paragraph into one single latent variable. The latent variable learned in our model relates more to the global semantic information of a paragraph, whereas those in (Serban et al., 2017) mainly contain the local information of a specific sentence. Park et al.(Park et al., 2018) introduced a variational hierarchical conversational model (VHCR) with global and local latent variables. They generate local/utterance variables condintioned on the global latent variable, assuming standard dialogcovariance Gaussian for both latent variables. In contrast, both our latent variables in ml-VAE-D are designed to contain global information. ml-VAE learns the prior of the bottom-level latent variable from the data, yielding more flexible prior relative to a fixed prior and promising results in mitigating the issue of “posterior collapse” in the experiments. The responses in VHCR are generated conditionally on the latent variables and context, while our ml-VAE-D model captures the underlying data distribution of the entire paragraph in the bottom latent variable (z1), so the global latent variable contains more information. 5 Experiments 5.1 Experimental Setup Datasets We conducted experiments on both generic (unconditional) long-form text generation and conditional paragraph generation (with additional text input as auxiliary information). For the former, we use two datasets: Yelp Reviews (Zhang et al., 2015) and arXiv Abstracts. For the conditional-generation experiments, we consider the task of synthesizing a paper abstract conditioned on the paper title (with the arXiv Abstracts dataset)4. Details on dataset statistics and model architectures are provided in the supplementary material. Baselines We implement the following langauge modeling baselines: language model with a flat LSTM decoder (flat-LM), VAE with a flat LSTM decoder (flat-VAE), and language model with a multi-level LSTM decoder (ml-LM)5. 
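Under the diagonal-Gaussian assumptions stated in Section 3.2, the KL term of Eq. 8 has a closed form; the sketch below (our own illustration with a single-sample approximation of the outer expectation over q(z2|x), not the paper's code) shows how it can be computed. A value of this quantity close to zero is what is meant by posterior collapse in the tables that follow.

```python
import torch

def hierarchical_kl(mu1_q, logvar1_q, mu2_q, logvar2_q, mu1_p, logvar1_p):
    """KL divergence of Eq. 8 under diagonal-Gaussian assumptions.

    q(z2|x) = N(mu2_q, diag exp(logvar2_q)),   p(z2)    = N(0, I)
    q(z1|x) = N(mu1_q, diag exp(logvar1_q)),   p(z1|z2) = N(mu1_p, diag exp(logvar1_p)),
    where (mu1_p, logvar1_p) are produced by a prior network from a sample of z2."""

    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        return 0.5 * torch.sum(
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0,
            dim=-1,
        )

    kl_z2 = gaussian_kl(mu2_q, logvar2_q,
                        torch.zeros_like(mu2_q), torch.zeros_like(logvar2_q))
    kl_z1 = gaussian_kl(mu1_q, logvar1_q, mu1_p, logvar1_p)  # one-sample estimate of E_{q(z2|x)}[.]
    return kl_z1 + kl_z2   # near-zero values indicate posterior collapse
```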
For generic text generation, we build models using two recently proposed generative models as baselines: Adversarial Autoencoders (AAE) (Makhzani et al., 2015) and AdversariallyRegularized Autoencoders (ARAE) (Zhao et al., 2018a). Instead of penalizing the KL divergence term, AAE introduces a discriminator network to match the prior and posterior distributions of the latent variable. AARE model extends AAE by introducing Wassertein GAN loss (Arjovsky et al., 2017) and a stronger generator network. We build two variants of our multi-level VAE models: single latent variable ml-VAE-S and double latent variable ml-VAE-D. Our code will be released to encourage future research. 5.2 Language Modeling Results We report negative log likelihood (NLL) and perplexity (PPL) results on Yelp and arXiv datasets. Following (Bowman et al., 2016; Yang et al., 2017; Kim et al., 2018), we use the KL loss term to measure the extent of “posterior collapse”. 4Our goal is to analyze if the proposed architecture can discover different concepts with the hierarchical decoding and latent code structures, thus we use the arxiv dataset with indicated domains for demonstration purposes. We leave the common summarization datasets for future research. 5We only experimented with state of the art models with similar architectures to our models, since our goal is to investigate the impact of hiararhical VAE structure on the text generation. More efficient new encoder and decoder architectures such as non-autoregressive models is a direction for extending this work. 2084 Model Yelp arXiv NLL KL PPL NLL KL PPL flat-LM 162.6 48.0 218.7 57.6 flat-VAE 163.1 0.01 49.2 219.5 0.01 58.4 ml-LM 162.4 47.9 219.3 58.1 ml-VAE-S 160.8 3.6 46.6 216.8 5.3 55.6 ml-VAE-D 160.2 6.8 45.8 215.6 12.7 54.3 Table 2: Language modeling results on Yelp and arXiv data. Upper block are baselines, and lower are our models. As shown in Table 2, the standard flat-VAE on Yelp dataset yields a KL divergence term very close to zero, indicating that the generative model makes negligible use of the information from latent variable z. The flat-VAE model obtains slightly worse NNL and PPL relative to a flat LSTM-based language model. With multi-level LSTM decoder, our ml-VAE-S yields increased KL divergence, demonstrating that the VAE model tends to leverage more information from the latent variable in the decoding stage. The PPL of mlVAE-S is also decreased from 47.9 to 46.6 (compared to ml-LM), indicating that the sampled latent codes improve word-level predictions. Our double latent variable model, ml-VAE-D, exhibits an even larger KL divergence cost term (increased from 3.6 to 6.8) than single latent variable model, indicating that more information from the latent variable has been utilized by the generative network. This may be due to the fact that the latent variable priors of the ml-VAE-D model are inferred from the data, rather than a fixed standard Gaussian distribution. As a result, the model is endowed with more flexibility to encode informative semantic features in the latent variables, yet matching their posterior distributions to the corresponding priors. ml-VAE-D achieves the best PPL results on both datasets (on the arXiv dataset, our hierarchical decoder outperforms the ml-LM by reducing the PPL from 58.1 down to 54.3). 5.3 Unconditional Text Generation We evaluate the quality of generated paragraphs as follows. We randomly sample 1000 latent codes and send them to all trained generative models to generate text. 
We use corpus-level BLEU score (Papineni et al., 2002) to quantitatively evaluate the generated paragraphs. Following strategy in (Yu et al., 2017; Zhang et al., 2017) we use the entire test set as the reference for each generated Figure 2: t-SNE visualization of the learned latent codes. text, and get the average BLEU scores6 over 1000 generated sentences for each model. The results are in Table 3. VAE tends to be a stronger baseline for paragraph generation, exhibiting higher corpus-level BLEU scores than both AAE and ARAE. This observation is consistent with the results in (C´ıfka et al., 2018) in Table 3. The VAE with multi-level decoder demonstrates better BLEU scores than the one with a flat decoder, indicating that the plan-ahead mechanism associated with the hierarchical decoding process indeed benefits the sampling quality. ml-VAE-D exhibits slightly better results than ml-VAE-S. We attribute this to the more flexible prior distribution of ml-VAE-D, which improves the ability of inference networks to extract semantic features from a paragraph, yielding more informative latent codes. We visualize the learnt latent variables to analyze if our models can extract global features. Using the arXiv dataset, we select the most frequent four article topics and re-train our ml-VAED model on the corresponding abstracts in an unsupervised way (no topic information is used). We sample bottom-level latent codes from the learned model and plot them with t-SNE in Figure 2. Each point indicates one paper abstract and the color of each point indicates the topic it belongs to. The embeddings of the same label are very close in the 2-D plot, while those with different labels are relatively farther away from each other. The embeddings of the High Energy Physics and Nuclear topic abstracts are meshed, which is expected since these two topics are semantically highly related. The inference network can extract meaningful global patterns from the input paragraph. In Table 1 two samples of generations from flatVAE and ml-VAE-D are shown. Compared to our hierarchical model, a flat decoder with a flat VAE 6Being interested in longer text generation, we evaluate our models on the n-gram reconscturion ability (where n>1). 2085 Model Yelp arXiv B-2 B-3 B-4 B-2 B-3 B-4 ARAE 0.684 0.524 0.350 0.624 0.475 0.305 AAE 0.735 0.623 0.383 0.729 0.564 0.342 flat-VAE 0.855 0.705 0.515 0.784 0.625 0.421 ml-VAE-S 0.901 0.744 0.531 0.821 0.663 0.447 ml-VAE-D 0.912 0.755 0.549 0.825 0.657 0.460 Table 3: Evaluation results for generated sequences by our models and baselines on corpus-level BLEU scores (B-n denotes the corpus-level BLEU-n score.) exibits repetitions as well as suffers from uninformative sentences. The hierarchical model generates reviews that contain more information with less repetitions (word or semantic semantic repetitions), and tend to be semantically-coherent. Diversity of Generated Paragraphs We evaluate the diversity of random samples from a trained model, since one model might generate realisticlooking sentences while suffering from severe mode collapse (i.e., low diversity). We use three metrics to measure the diversity of generated paragraphs: Self-BLEU scores (Zhu et al., 2018), unique n-grams (Fedus et al., 2018) and the entropy score (Zhang et al., 2018). 
For a set of sampled sentences, the Self-BLEU metric is the BLEU score of each sample with respect to all other samples as the reference (the numbers over all samples are then averaged); the unique score computes the percentage of unique n-grams within all the generated reviews; and the entropy score measures how evenly the empirical n-gram distribution is for a given sentence, which does not depend on the size of testing data, as opposed to unique scores. In all three metrics, lower is better. We randomly sample 1000 reviews from each model. The results are shown in Table 5. A small selfBLEU score together with a large BLEU score can justify the effectiveness of a model, i.e., being able to generate realistic-looking as well as diverse samples. Among all the VAE variants, ml-VAE-D shows the smallest BLEU score and largest unique n-grams percentage, demonstrating the effectiveness of hieararhically structured generative networks as well as latent variables. Even though AAE and ARAE yield better diversity according to both metrics, their corpus-level BLEU scores are much worse relative to ml-VAE-D. We leverage human evaluation for further comparison. we study the effect of disorder on the dynamics of a twodimensional electron gas in a two-dimensional optical lattice , we show that the superfluid phase is a phase transition , we also show that , in the presence of a magnetic field , the vortex density is strongly enhanced . in this work we study the dynamics of a colloidal suspension of frictionless , the capillary forces are driven by the UNK UNK , when the substrate is a thin film , the system is driven by a periodic potential, we also study the dynamics of the interface between the two different types of particles . Table 4: Generated arXiv abstracts from ml-VAE-D model. Model Yelp B-2 B-3 B-4 2gr 3gr 4gr Etp-2 ARAE 0.725 0.544 0.402 36.2 59.7 75.8 7.551 AAE 0.831 0.672 0.483 33.2 57.5 71.4 6.767 flat-VAE 0.872 0.755 0.617 23.7 48.2 69.0 6.793 ml-VAE-S 0.865 0.734 0.591 28.7 50.4 70.7 6.843 ml-VAE-D 0.851 0.723 0.579 30.5 53.2 72.6 6.926 Table 5: Evaluation of diversity of 1000 generated sentences on self-BLEU scores (B-n), unique n-gram percentages (ngr), 2-gram entropy score. Human Evaluation We conducted human evaluations using Amazon Mechanical Turk to assess the coherence and non-redundancy properties of our proposed models. Given a pair of generated reviews, the judges are asked to select their preferences (no difference between the two reviews is also an option) according to the following four evaluation criteria: fluency & grammar, consistency, non-redundancy, and overall. We compare generated text from our ml-VAE-D againts flatVAE, AAE and real samples from the test set. Details are provided in the supplementary material. As shown in Table 7, ml-VAE generates superior human-looking samples compared to flat-VAE on the Yelp Reviews dataset. Even though both models underperform when compared against the ground-truth real reviews, ml-VAE was rated higher in comparison to flat-VAE (raters find mlVAE closer to human-generated than the flat-VAE) in all the criteria evaluation criteria. When compared against AAE baseline models using the same data preprocessing steps and hyperparameters, ml-VAE again produces more grammaticallycorrect and semantically-coherent samples. 
The human evaluations correlate with the automatic metrics, which indicate that our ml-VAE is ac2086 Title: Magnetic quantum phase transitions of the antiferromagnetic - Heisenberg model We study the phase diagram of the model in the presence of a magnetic field, The model is based on the action of the Polyakov loop, We show that the model is consistent with the results of the first order perturbation theory. Title: Kalman Filtering With UNK Over Wireless UNK Channels The Kalman filter is a powerful tool for the analysis of quantum information, which is a key component of quantum information processing, However, the efficiency of the proposed scheme is not well understood . Table 6: Conditionally generated arXiv paper abstracts from ml-VAE-D model based on a given title. Model Grammar. Cons. Non-Red. Overall ml-VAE 52.0 55.0 53.7 60.0 flat-VAE 30.0 33.0 27.7 32.3 ml-VAE 75.3 86.0 76.7 86.0 AAE 13.3 10.3 15.0 12.0 flat-VAE 19.7 18.7 14.3 19.0 Real data 61.7 74.7 74.3 77.7 ml-VAE 28.0 26.3 25.0 30.3 Real data 48.6 58.7 49.0 61.3 Table 7: Human evaluations on Yelp Reviews dataset. Each block is a head-to-head comparison of two models on grammatically, consistency, and nonredundancy. tually generating more coherent stories than the baseline models. We leave further evaluations using embedding based metrics as a possible extension to our work. 5.4 Conditional Paragraph Generation We consider the task of generating an abstract of a paper based on the corresponding title. The same arXiv dataset is utilized, where when training the title and abstract are given as paired text sequences. The title is used as input of the inference network. For the generative network, instead of reconstructing the same input (i.e., title), the paper abstract is employed as the target for decoding. We compare the ml-VAE-D model against ml-LM. We observe that the ml-VAE-D model achieves a test perplexity of 55.7 (with a KL term of 2.57), smaller that the test perplexity of ml-LM (58.1), indicating that the information from the title is used by the generative network to facilitate the decoding process. Generated abstract samples from ml-VAE-D model are shown in Table 6. A the service was great, the receptionist was very friendly and the place was clean, we waited for a while, and then our room was ready . • same with all the other reviews, this place is a good place to eat, i came here with a group of friends for a birthday dinner, we were hungry and decided to try it, we were seated promptly. • this place is a little bit of a drive from the strip, my husband and i were looking for a place to eat, all the food was good, the only thing i didn t like was the sweet potato fries. • this is not a good place to go, the guy at the front desk was rude and unprofessional, it s a very small room, and the place was not clean. • service was poor, the food is terrible, when i asked for a refill on my drink, no one even acknowledged me, they are so rude and unprofessional. B how is this place still in business, the staff is rude, no one knows what they are doing, they lost my business . Table 8: Intermediate sentences are produced from linear transition between two points in the latent space and sending them to the generator network. 5.5 Analysis The Continuity of Latent Space Following (Bowman et al., 2016), we measure the continuity of the learned latent space. 
We randomly sample two points from the prior latent space (denoted as A and B) and generate sentences based on equidistant intermediate points along the linear trajectory between A and B. As shown in Table 8, these intermediate samples are all realistic-looking reviews that are syntactically and semantically reasonable, demonstrating the smoothness of the learned VAE latent space. Interestingly, we even observe that the generated sentences gradually transition from positive to negative sentiment along the linear trajectory. To validate that the sentences are not generated by simply retrieving the training data, we find the closest instance, among the entire training set, for each generated review. Details of the results can be found in the supplementary material (Table 12).

Attribute Vector Arithmetic. We conduct an experiment to alter the sentiment of reviews with an attribute vector. We encode the positive-sentiment reviews of the Yelp Review training set, sample a latent code for each review, and compute the mean latent vector. The mean latent vector of the negative reviews is computed in the same way. We subtract the negative mean vector from the positive mean vector to obtain the "sentiment attribute vector". Next, for evaluation, we randomly sample 1000 reviews with negative sentiment and add the "sentiment attribute vector" to their latent codes. The manipulated latent vectors are then fed to the hierarchical decoder to produce the transferred sentences, hypothesizing that they will convey positive sentiment. As shown in Table 9, the original sentences have been successfully manipulated to positive sentiment with this simple attribute vector operation. However, the specific contents of the reviews are not fully retained. One interesting future direction is to decouple the style and content of long-form texts to allow content-preserving attribute manipulation. We employed a CNN sentiment classifier to evaluate the sentiment of the manipulated sentences. The classifier is trained on the entire training set and achieves a test accuracy of 94.2%. With this pre-trained classifier, 83.4% of the transferred reviews are predicted as positive-sentiment, indicating that "attribute vector arithmetic" consistently produces the intended manipulation of sentiment.

Table 9: An example sentiment transfer result with attribute vector arithmetic. More examples can be found in the supplementary material (Table 13).
Original: you have no idea how badly i want to like this place, they are incredibly vegetarian vegan friendly , i just haven t been impressed by anything i ve ordered there , even the chips and salsa aren t terribly good , i do like the bar they have great sangria but that s about it .
Transferred: this is definitely one of my favorite places to eat in vegas , they are very friendly and the food is always fresh, i highly recommend the pork belly , everything else is also very delicious, i do like the fact that they have a great selection of salads .

6 Conclusion
We introduce a hierarchically-structured variational autoencoder for long text generation. It consists of a multi-level LSTM generative network that models semantic coherence at both the word and sentence levels. A hierarchy of stochastic layers is employed, where the priors of the latent variables are learned from the data. Consequently, more informative latent codes are obtained, and the generated samples also exhibit superior quality relative to those from several baseline methods.
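The attribute-vector arithmetic described above is straightforward to express in code. The sketch below is a hypothetical illustration: `encode` and `decode` are stand-ins for the trained ml-VAE's inference and hierarchical generative networks (not the authors' actual API), and NumPy is assumed.

```python
import numpy as np

def sentiment_attribute_vector(encode, positive_reviews, negative_reviews):
    """Mean positive latent code minus mean negative latent code."""
    pos_mean = np.mean([encode(r) for r in positive_reviews], axis=0)
    neg_mean = np.mean([encode(r) for r in negative_reviews], axis=0)
    return pos_mean - neg_mean

def transfer_sentiment(encode, decode, reviews, attribute_vec):
    """Add the attribute vector to each latent code and decode the result."""
    return [decode(encode(r) + attribute_vec) for r in reviews]

# `encode` maps a review to a sampled latent code and `decode` maps a latent
# code back to text; both are assumed to come from the trained hierarchical VAE.
```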
References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zhang. 2016. Tensorflow: A system for largescale machine learning. In OSDI. Martin Arjovsky, Soumith Chintala, and Lon Bottou. 2017. Wasserstein gan. arXiv preprint arXiv:1701.07875. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. in international conference on learning representations. ICLR. Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, and Pascal Poupart. 2018. Variational attention for sequence-to-sequence models. In COLING. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL. Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018a. Adversarial text generation via featuremover’s distance. In Advances in Neural Information Processing Systems, pages 4671–4682. Mingda Chen, Qingming Tang, Karen Livescu, and Kevin Gimpel. 2018b. Variational sequential labelers for semi-supervised learning. In EMNLP. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. NAACL. Ondˇrej C´ıfka, Aliaksei Severyn, Enrique Alfonseca, and Katja Filippova. 2018. Eval all, trust a few, do wrong to none: Comparing sentence generation models. arXiv preprint arXiv:1804.07972. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander M. Rush. 2018. Latent alignment and variational attention. CoRR, abs/1807.03756. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In EMNLP. William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. Maskgan: Better text generation via filling in the . arXiv preprint arXiv:1801.07736. 2088 Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigate kl vanishing. NAACL. Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Cˆot´e, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In NIPS. Xiaodong Gu, Kyunghyun Cho, Jungwoo Ha, and Sunghun Kim. 2018. Dialogwae: Multimodal response generation with conditional wasserstein auto-encoder. arXiv preprint arXiv:1805.12352. Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. 2016. Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. TACL, 6:437–450. Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems, pages 10052–10062. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. 
Lagging inference networks and posterior collapse in variational autoencoders. ICLR. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. ACL. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In ICML. Lukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Parmar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In ICML. Yoon Kim, Sam Wiseman, Andrew C. Miller, David A Sontag, and Alexander M. Rush. 2018. Semiamortized variational autoencoders. In ICML. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Jiwei Li, Minh-Thang Luong, and Daniel Jurafsky. 2015a. A hierarchical neural autoencoder for paragraphs and documents. In ACL. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015b. A hierarchical neural autoencoder for paragraphs and documents. In ACL, volume 1, pages 1106–1115. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. 2015. Adversarial autoencoder. CoRR. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In ICML. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML, pages 1727–1736. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In NAACL-HLT. Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck. 2018. A hierarchical latent vector model for learning long-term structure in music. arXiv preprint arXiv:1803.05428. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In EMNLP. Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI. Dmitriy Serdyuk, Nan Rosemary Ke, Alessandro Sordoni, Christopher Joseph Pal, and Yoshua Bengio. 2017. Twin networks: Using the future as a regularizer. CoRR, abs/1708.06742. Harshil Shah and David Barber. 2018. Generative neural machine translation. arXiv preprint arXiv:1806.05138. Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, and Ricardo Henao. 2018a. Nash: Toward end-to-end neural architecture for generative semantic hashing. In ACL. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2018b. Deconvolutional latent-variable model for text sequence matching. In AAAI. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In ACL. 2089 Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. 
Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Yingxu Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Topic-guided variational autoencoders for text generation. NAACL. Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In EMNLP. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In ICML. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. Structvae: Tree-structured latent variable models for semi-supervised semantic parsing. In ACL. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS, pages 649–657. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In NIPS. Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In ICML. Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. 2018a. Adversarially regularized autoencoders. In ICML. Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018b. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. arXiv preprint arXiv:1804.08069. Tiancheng Zhao, Ran Zhao, and Maxine Esk´enazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. arXiv preprint arXiv:1802.01886.
2019
200
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2090–2101 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2090 Jointly Learning Semantic Parser and Natural Language Generator via Dual Information Maximization Hai Ye♭ Wenjie Li♮ Lu Wang♯ ♭Information Sciences Institute, University of Southern California ♮Department of Computing, The Hong Kong Polytechnic University ♯College of Computer and Information Science, Northeastern University [email protected], [email protected], [email protected] Abstract Semantic parsing aims to transform natural language (NL) utterances into formal meaning representations (MRs), whereas an NL generator achieves the reverse: producing a NL description for some given MRs. Despite this intrinsic connection, the two tasks are often studied separately in prior work. In this paper, we model the duality of these two tasks via a joint learning framework, and demonstrate its effectiveness of boosting the performance on both tasks. Concretely, we propose the method of dual information maximization (DIM) to regularize the learning process, where DIM empirically maximizes the variational lower bounds of expected joint distributions of NL and MRs. We further extend DIM to a semisupervision setup (SEMIDIM), which leverages unlabeled data of both tasks. Experiments on three datasets of dialogue management and code generation (and summarization) show that performance on both semantic parsing and NL generation can be consistently improved by DIM, in both supervised and semi-supervised setups1. 1 Introduction Semantic parsing studies the task of translating natural language (NL) utterances into formal meaning representations (MRs) (Zelle and Mooney, 1996; Tang and Mooney, 2000). NL generation models can be designed to learn the reverse: mapping MRs to their NL descriptions (Wong and Mooney, 2007). Generally speaking, MR often takes a logical form that captures the semantic meaning, including λcalculus (Zettlemoyer and Collins, 2005, 2007), Abstract Meaning Representation (AMR) (Banarescu et al., 2013; Misra and Artzi, 2016), and general-purpose computer programs, such as 1Code for this paper is available at: https:// github.com/oceanypt/DIM x ỹ  pθ → (x,y) = p(x) (y|x) pe pθ y x̃  qϕ −→ (x,y) = q(y) (x|y) pd qϕ x y ⟷ (x,y) Figure 1: Illustration of our joint learning model. x: NL; y: MRs. pθ(y|x): semantic parser; qφ(x|y): NL generator. We model the duality of the two tasks by matching the joint distributions of pe(x, y) (learned from semantic parser) and pd(x, y) (learned from NL generator) to an underlying unknown distribution P(x, y). Python (Yin and Neubig, 2017) or SQL (Zhong et al., 2017). Recently, NL generation models have been proposed to automatically construct humanreadable descriptions from MRs, for code summarization (Hu et al., 2018; Allamanis et al., 2016; Iyer et al., 2016) that predicts the function of code snippets, and for AMR-to-text generation (Song et al., 2018; Konstas et al., 2017; Flanigan et al., 2016). Specifically, a common objective that semantic parsers aim to estimate is pθ(y|x), the conditional distribution between NL input x and the corresponding MR output y, as demonstrated in Fig. 1. Similarly, for NL generation from MRs, the goal is to learn a generator of qφ(x|y). As demonstrated in Fig. 2, there is a clear duality between the two tasks, given that one task’s input is the other task’s output, and vice versa. 
However, such duality remains largely unstudied, even though joint modeling has been demonstrated effective in various NLP problems, e.g. question answering and generation (Tang et al., 2017), machine translation between paired languages (He et al., 2016), as well as sentiment prediction and subjective text gener2091 1 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 DATA EXAMPLE Ave. Token ATIS x can you list all flights from chicago to milwaukee 10.6 y ( lambda $0 e ( and ( flight $0) ( from $0 chicago: ci) ( to $0 milwaukee: ci) ) ) 26.5 DJANGO x convert max entries into a string, substitute it for self. max entries. 11.9 y self. max entries = int(max entries) 8.2 CONALA x more pythonic alternative for getting a value in range not using min and max 9.7 y a = 1 if x < 1 else 10 if x > 10 else x 14.1 Figure 2: Sample natural language utterances and meaning representations from datasets used in this work: ATIS for dialogue management; DJANGO (Oda et al., 2015) and CONALA (Yin et al., 2018a) for code generation and summarization. ation (Xia et al., 2017). In this paper, we propose to jointly model semantic parsing and NL generation by exploiting the interaction between the two tasks. Following previous work on dual learning (Xia et al., 2017), we leverage the joint distribution P(x, y) of NL and MR to represent the duality. Intuitively, as shown in Fig. 1, the joint distributions of pe(x, y) = p(x)pθ(y|x), which is estimated from semantic parser, and pd(x, y) = q(y)qφ(x|y), which is modeled by NL generator, are both expected to approximate P(x, y), the unknown joint distribution of NL and MR. To achieve this goal, we propose dual information maximization (DIM) (§3) to empirically optimize the variational lower bounds of the expected joint distributions of pe(x, y) and pd(x, y). Concretely, the coupling of the two expected distributions is designed to capture the dual information, with both optimized via variational approximation (Barber and Agakov, 2003) inspired by Zhang et al. (2018). Furthermore, combined with the supervised learning objectives of semantic parsing and NL generation, DIM bridges the two tasks within one joint learning framework by serving as a regularization term (§2.2). Finally, we extend supervised DIM to semi-supervision setup (SEMIDIM), where unsupervised learning objectives based on unlabeled data are also optimized (§3.3). We experiment with three datasets from two different domains: ATIS for dialogue management; DJANGO and CONALA for code generation and summarization. Experimental results show that both the semantic parser and generator can be consistently improved with joint learning using DIM and SEMIDIM, compared to competitive comparison models trained for each task separately. Overall, we have the following contributions in this work: • We are the first to jointly study semantic parsing and natural language generation by exploiting the duality between the two tasks; • We propose DIM to capture the duality and adopt variational approximation to maximize the dual information; • We further extend supervised DIM to semisupervised setup (SEMIDIM). 
2 Problem Formulation 2.1 Semantic Parsing and NL Generation Formally, the task of semantic parsing is to map the input of NL utterances x to the output of structured MRs y, and NL generation learns to generate NL from MRs. Learning Objective. Given a labeled dataset L = {⟨xi, yi⟩}, we aim to learn a semantic parser (x →y) by estimating the conditional distribution pθ(y|x), parameterized by θ, and an NL generator (y →x) by modeling qφ(x|y), parameterized by φ. The learning objective for each task is shown below: Lparser = E⟨x,y⟩[log pθ(y|x)] (1) Lgen. = E⟨x,y⟩[log qφ(x|y)] (2) Frameworks. Sequence-to-sequence (seq2seq) models have achieved competitive results on both semantic parsing and generation (Dong and Lapata, 2016; Hu et al., 2018), and without loss of generality, we adopt it as the basic framework for both tasks in this work. Specifically, for both pθ(y|x) and qφ(x|y), we use a two-layer bidirectional LSTM (bi-LSTM) as the encoder and another one-layer LSTM as the decoder with attention mechanism (Luong et al., 2015). Furthermore, we leverage pointer network (Vinyals et al., 2015) to copy tokens from the input to handle outof-vocabulary (OOV) words. The structured MRs are linearized for the sequential encoder and de2092 coder. More details of the parser and the generator can be found in Appendix A. Briefly speaking, our models differ from existing work as follows: PARSER: Our architecture is similar to the one proposed in Jia and Liang (2016) for semantic parsing; GENERATOR: Our model improves upon the DEEPCOM coder summarization system (Hu et al., 2018) by: 1) replacing LSTM with biLSTM for the encoder to better model context, and 2) adding copying mechanism. 2.2 Jointly Learning Parser and Generator Our joint learning framework is designed to model the duality between a parser and a generator. To incorporate the duality into our learning process, we design the framework to encourage the expected joint distributions pe(x, y) and pd(x, y) to both approximate the unknown joint distribution of x and y (shown in Fig. 1). To achieve this, we introduce dual information maximization (DIM) to empirically optimize the variational lower bounds of both pe(x, y) and pd(x, y), in which the coupling of expected distributions is captured as dual information (detailed in §3.1) and will be maximized during learning. Our joint learning objective takes the form of: max θ,φ L(θ, φ) = Lparser+Lgen.+λ·LDIM(θ, φ) (3) LDIM is the variational lower bound of the two expected joint distributions, specifically, LDIM = Le DIM (Eq. 6) + Ld DIM (Eq. 9) (4) where Le DIM and Ld DIM are the lower bounds over pe(x, y) and pd(x, y) respectively. The hyperparameter λ trades off between supervised objectives and dual information learning. With the objective of Eq. 3, we jointly learn a parser and a generator, as well as maximize the dual information between the two. LDIM serves as a regularization term to influence the learning process, whose detailed algorithm is described in §3. Our method of DIM is model-independent. If the learning objectives for semantic parser and NL generator are subject to Eq. 1 and Eq. 2, we can always adopt DIM to conduct joint learning. Out of most commonly used seq2seq models for the parser and generator, more complex tree and graph structures have been adopted to model MRs (Dong and Lapata, 2016; Song et al., 2018). In this paper, without loss of generality, we study our joint-learning method on the widely-used seq2seq frameworks mentioned above (§2.1). 
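As a schematic view of the joint objective in Eq. 3, the sketch below combines the two supervised negative log-likelihoods with the DIM regularizer. The `parser`, `generator`, and `dim_lower_bound` objects (and their `log_prob` methods) are placeholders for the actual models and for Eq. 4; this illustrates only how the terms are combined and is not the authors' implementation.

```python
def joint_loss(parser, generator, dim_lower_bound, x, y, lam=0.1):
    """Negative of Eq. 3: minimizing this maximizes L_parser + L_gen + lambda * L_DIM."""
    loss_parser = -parser.log_prob(y, given=x)   # -log p_theta(y|x), Eq. 1
    loss_gen = -generator.log_prob(x, given=y)   # -log q_phi(x|y),  Eq. 2
    l_dim = dim_lower_bound(x, y)                # Eq. 4: L^e_DIM + L^d_DIM
    return loss_parser + loss_gen - lam * l_dim
```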
3 Dual Information Maximization In this section, we first introduce dual information in §3.1, followed by its maximization (§3.2). §3.3 discusses its extension with semi-supervision. 3.1 Dual Information As discussed above, we treat semantic parsing and NL generation as the dual tasks and exploit the duality between the two tasks for our joint learning. With conditional distributions pθ(y|x) for the parser and qφ(x|y) for the generator, the joint distributions of pe(x, y) and pd(x, y) can be estimated as pe(x, y) = p(x)pθ(y|x) and pd(x, y) = q(y)qφ(x|y), where p(x) and q(y) are marginals. The dual information Ipe,d(x,y) between the two distributions is defined as follows: Ipe,d(x,y) = Ipe(x,y) + Ipd(x,y) ≜ Epe(x,y) log pe(x,y) + Epd(x,y) log pd(x, y) (5) which is the combination of the two joint distribution expectations. To leverage the duality between the two tasks, we aim to drive the learning of the model parameters θ and φ via optimizing Ipe,d(x,y), so that the expectations of joint distributions pe(x, y) and pd(x, y) will be both maximized and approximate the latent joint distribution P(x, y), whose procedure is similar to the joint distribution matching (Gan et al., 2017). By exploiting the inherent probabilistic connection between the two distributions, we hypothesize that it would enhance the learning of both tasks on parsing pθ(y|x) and generation qφ(x|y). Besides, to approach the same distribution P(x, y), the expected joint distributions can learn to be close to each other, making the dual models coupled. 3.2 Maximizing Dual Information Here, we present the method for optimizing Ipe(x,y), which can also be applied to Ipd(x,y). In contrast to the parameter sharing techniques in most multi-task learning work (Collobert et al., 2011; Ando and Zhang, 2005), parameter θ for the parser and parameter φ for generator are independent in our framework. In order to jointly train the two models and bridge the learning of θ and φ, during the optimization of Ipe(x,y), where the parser is the primal model, we utilize the distributions of the dual task (i.e. the generator) to estimate Ipe(x,y). In this way, θ and φ can 2093 is  there  ground transportation in st. louis   pθ ŷ 1 ŷ n log (x| )q( ) qϕ ŷ  ŷ  ⋯⋯ ( ( + e DIM \_lambda  \$0  e  (  \_and  ( \_ground\_transport \$0 ) ...   qϕ x̂ 1 x̂ n log (y| )p( ) pθ x̂  x̂  ⋯⋯ ( ( + d DIM ⋯ ⋯ ⋯ ⋯ + DIM x y 1 n 1 n parsing − → −−−− generation − → −−−−−− Figure 3: The pipeline of calculating lower bounds. We firstly use the parser or generator to sample MR or NL targets, then the sampled candidates go through the dual model and a language model to obtain the lower bounds. be both improved during the update of Ipe(x,y). Specifically, we rewrite Epe(x,y) log pe(x, y) as Epe(x,y) log pe(y)pe(x|y), where pe(y) and pe(x|y) are referred as the dual task distributions. However, the direct optimization for this objective is impractical since both pe(y) and pe(x|y) are unknown. Our solution is detailed below. Lower Bounds of Dual Information. To provide a principled approach of optimizing Ipe(x,y), inspired by Zhang et al. (2018), we follow Barber and Agakov (2003) to adopt variational approximation to deduce its lower bound and instead maximize the lower bound. 
The lower bound deduction process is as following: Epe(x,y) log pe(x, y) = Epe(x,y) log pe(x|y)pe(y) = Epe(x,y) log qφ(x|y) + Epe(x,y) log q(y) +Epe(y)  KL(pe(x|y)∥qφ(x|y))  +Epe(x|y)  KL(pe(y)∥q(y))  ⩾Epe(x,y)  log qφ(x|y) + log q(y)  = Le DIM(θ, φ) (6) where KL(·∥·)(⩾0) is the Kullback-Leibler (KL) divergence. Therefore, to maximize Ipe(x,y), we can instead maximize its lower bound of Le DIM. Le DIM is learned by using qφ(x|y) and q(y) which approximate pe(x|y) and pe(y). Besides, the lower bound of Le DIM is the function of θ and φ, so in the process of learning Le DIM, the parser and generator can be both optimized. As illustrated in Fig. 3, in the training process, to calculate the lower bound of Le DIM, we first use the being-trained parser to sample MR candidates for a given NL utterance. The sampled MRs then go through the generator and a marginal model (i.e., a language model of MRs) to obtain the final lower bound. To learn the lower bound of Le DIM, we provide the following method to calculate its gradients: Gradient Estimation. We adopt Monte Carlo samples using the REINFORCE policy (Williams, 1992) to approximate the gradient of Le DIM(θ, φ) with regard to θ: ∇θLe DIM(θ, φ) = Epθ(y|x)∇θ log pθ(y|x) · [log qφ(x|y) + log q(y) −b] = Epθ(y|x)∇θ log pθ(y|x) · l(x, y; φ) ≈1 |S| X ˆyi∈S ∇θ log pθ(ˆyi|x) · l(x, ˆyi; φ) (7) l(x, y; φ) can be seen as the learning signal from the dual model, which is similar to the reward in reinforcement learning algorithms (Guu et al., 2017; Paulus et al., 2017). To handle the highvariance of learning signals, we adopt the baseline function b by empirically averaging the signals to stabilize the learning process (Williams, 1992). With prior pθ(·|x), we use beam search to generate a pool of MR candidates (y), denoted as S, for the input of x. The gradient with regard to φ is then calculated as: ∇φLe DIM(θ, φ) = Epθ(y|x)∇φ log qφ(x|y) ≈1 |S| X ˆyi∈S ∇φ log qφ(x|ˆyi) (8) The above maximization procedure for Le DIM is analogous to the EM algorithm: Step 1: Freeze φ and find the optimal θ∗= arg maxθ Le DIM(θ, φ) with Eq. 7; Step 2: Based on Eq. 8, with freezing θ∗, find the optimal φ∗= arg maxφ Le DIM(θ, φ). The two steps are repeated until convergence. According to the gradient estimation in Eq. 7, when updating θ for the parser, we receive the learning signal l(x, y; φ) from the generator, and this learning signal can be seen as a reward from the generator: if parser pθ(y|x) predicts high2094 quality MRs, the reward will be high; otherwise, the reward is low. This implies that the generator guides the parser to generate high-quality MRs, through which the lower bound for the expected joint distribution gets optimized. This also applies to the situation when we treat the generator as the primal model and the parser as the dual model. The lower bound of Ipd(x,y) can be calculated in a similar way: Epd(x,y) log pd(x, y) ⩾Epd(x,y)  log pθ(y|x)+log p(x)  = Ld DIM(θ, φ) (9) which can be optimized the same way as in Eqs. 7 and 8 for estimating the gradients for Ld DIM. Marginal Distributions. To obtain the marginal distributions p(x) and q(y), we separately train an LSTM-based language model (Mikolov et al., 2010) for NL and MR respectively, on each training set. Structured MRs are linearized into sequences for the sequential encoder and decoder in seq2seq models. Details on learning marginal distributions can be found in Appendix B. Joint Learning Objective. 
Our final joint learning objective becomes: max θ,φ J = X ⟨x,y⟩∈L  log pθ(y|x) + log qφ(x|y) + λ X ˆyi∼pθ(·|x) log qφ(x|ˆyi) + log q(ˆyi) + λ X ˆxi∼qφ(·|y) log pθ(y|ˆxi) + log p(ˆxi)  (10) According to this learning objective, after picking up a data pair ⟨x, y⟩, we will firstly calculate the supervised learning loss, then we sample MR candidates and NL samples using prior pθ(·|x) and qφ(·|y) respectively to obtain the corresponding lower bounds over Ipe(x,y) and Ipd(x,y). 3.3 Semi-supervised DIM (SEMIDIM) We further extend DIM with semi-supervised learning. We denote the unlabeled NL dataset as Ux = {xi} and the unlabeled MR dataset as Uy = {yi}. To leverage Ux, we maximize the unlabeled objective Ex∼Ux log p(x). Our goal is to involve model parameters in the optimization process of Ex∼Ux log p(x), so that the unlabeled data can facilitate parameter leanring. Lower Bounds of Unsupervised Objective. The lower bound of Ex∼Ux log p(x) is as follows, using the deduction in Ineq. 6: Ex∼Ux log p(x) ≥Ex∼Ux,y∼pθ(·|x) log p(x)pθ(y|x) ≥Ex∼Ux,y∼pθ(·|x)  log qφ(x|y) + q(y)  (11) Comparing Ineq. 11 to Ineq. 6, we can see that the unsupervised objective Ex∼Ux log p(x) and Ipe(x,y) share the same lower bound, so that the same optimization method from Eq. 7 and Eq. 8 can be utilized for learning the lower bound over Ex∼Ux log p(x). Analysis. The lower bound of the unsupervised objective Ex∼Ux log p(x) is a function of θ and φ. Therefore, updating this unsupervised objective will jointly optimize the parser and the generator. From the updating algorithm in Eq. 7, we can see that the parser pθ(y|x) is learned by using pseudo pair (x, ˆy) where ˆy is sampled from pθ(·|x). This updating process resembles the popular semi-supervised learning algorithm of self-train that predicts pseudo labels for unlabeled data (Lee, 2013) and then attaches the predicted labels to the unlabeled data as additional training data. In our algorithm, the pseudo sample (x, ˆy) will be weighted by the learning signal l(x, ˆy; φ), which decreases the impact of low-quality pseudo samples. Furthermore, from Eq. 8, the generator qφ(x|y) is updated using the pseudo sample (x, ˆy), which is similar to the semi-supervised learning method of back-boost that is widely used in Neural Machine Translation for low-resource language pairs (Sennrich et al., 2016). Given the target-side corpus, back-boost generates the pseudo sources to construct pseudo samples, which is added for model training. Similarly, to leverage the unlabeled data Uy for semi-supervised learning, following Ineq. 11, we could also have the lower bound for Ey∼Uy log p(y) as following, Ey∼Uy log p(y) ≥Ey∼Uy,x∼qφ(·|y)  log pθ(y|x) + p(x)  (12) which is the same as the lower bound of Ipd(x,y). Semi-supervised Joint Learning Objective. From the above discussions, we can deduce the lower bounds for the unsupervised objectives to be the same as the lower bounds of the dual information. We thus have the following semi-supervised joint-learning objective: max θ,φ J = X ⟨x,y⟩∈L log pθ(y|x) + log qφ(x|y)  2095 DATA Train Valid Test All ATIS 4,480 480 450 5,410 DJANGO 16,000 1,000 1,805 18,805 CONALA 90,000 5,000 5,000 100,000 Table 1: Statistics of datasets used for evaluation. Around 500K additional samples of low confidence from CONALA are retained for model pre-training. +λ X x∼Dx,ˆyi∼pθ(·|x) log qφ(x|ˆyi) + log q(ˆyi)  +λ X y∼Dy,ˆxi∼qφ(·|y) log pθ(y|ˆxi) + log p(ˆxi)  (13) where Dx = Ux ∪Lx and Dy = Uy ∪Ly. 
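To make the gradient estimators in Eqs. 7 and 8 concrete, the sketch below computes a surrogate loss for the parser-as-primal direction: MR candidates are sampled with beam search, the learning signal combines the generator and an MR language model, and the averaged signal serves as the baseline b. It is a schematic, PyTorch-flavored sketch that assumes the models expose `beam_search` and scalar `log_prob` methods (hypothetical names); it is not the authors' code. The mirror-image loss for the generator-as-primal direction (Eq. 9) follows by swapping the roles of the two models.

```python
import torch

def dim_parser_loss(parser, generator, mr_lm, x, beam_size=5):
    """Surrogate loss whose gradients match Eqs. 7 and 8 (parser as primal)."""
    # Candidate set S: MR samples y_hat ~ p_theta(.|x) via beam search.
    candidates = parser.beam_search(x, beam_size=beam_size)

    # Learning signal l(x, y_hat; phi) = log q_phi(x|y_hat) + log q(y_hat).
    signals = torch.stack([generator.log_prob(x, given=y) + mr_lm.log_prob(y)
                           for y in candidates])
    baseline = signals.mean().detach()  # b: averaged signal, for variance reduction

    parser_logps = torch.stack([parser.log_prob(y, given=x) for y in candidates])

    # Eq. 7: REINFORCE term for theta; the signal is treated as a fixed reward.
    loss_theta = -((signals.detach() - baseline) * parser_logps).mean()
    # Eq. 8: likelihood term for phi on the sampled pairs (x, y_hat).
    loss_phi = -signals.mean()
    return loss_theta + loss_phi
```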
In this work, we weight the dual information and unsupervised objectives equally for simplicity, so the lower bounds over them are combined for joint optimization. We combine the labeled and unlabeled data to calculate the lower bounds to optimize the variational lower bounds of dual information and unsupervised objectives. 4 Experiments 4.1 Datasets Experiments are conducted on three datasets with sample pairs shown in Fig. 2: one for dialogue management which studies semantic parsing and generation from λ-calculus (Zettlemoyer and Collins, 2007) (ATIS) and two for code generation and summarization (DJANGO, CONALA). ATIS. This dataset has 5,410 pairs of queries (NL) from a flight booking system and corresponding λcalculus representation (MRs). The anonymized version from Dong and Lapata (2016) is used. DJANGO. It contains 18,805 lines of Python code snippets (Oda et al., 2015). Each snippet is annotated with a piece of human-written pseudo code. Similar to Yin and Neubig (2017), we replace strings separated by quotation marks with indexed place holder in NLs and MRs. CONALA. This is another Python-related corpus containing 598,237 intent/snippet pairs that are automatically mined from Stack Overflow (Yin et al., 2018a). Different from DJANGO, the intent in CONALA is mainly about the question on a specific topic instead of pseudo code. The full dataset contains noisy aligned pairs, and we keep the top 100,000 pairs of highest confidence scores for experiment and the rest for model pre-training. For DJANGO and CONALA, the NL utterances SEMANTIC PARSING (in Acc.) Pro. SUPER DIM SEMIDIM SELFTRAIN 1/4 64.7 69.0 71.9 66.3 1/2 78.1 78.8 80.8 79.2 full 84.6 85.3 – – Previous Supervised Methods (Pro. = full) Acc. SEQ2TREE (Dong and Lapata, 2016) 84.6 ASN (Rabinovich et al., 2017) 85.3 ASN+SUPATT (Rabinovich et al., 2017) 85.9 COARSE2FINE (Dong and Lapata, 2018) 87.7 NL GENERATION (in BLEU) Pro. SUPER DIM SEMIDIM BACKBOOST 1/4 36.9 37.7 39.1 40.9 1/2 39.1 40.7 40.9 39.3 full 39.3 40.6 – – Previous Supervised Methods (Pro. = full) BLEU DEEPCOM (Hu et al., 2018) 42.3 Table 2: Semantic parsing and NL generation results on ATIS. Pro.: proportion of the training samples used for training. Best result in each row is highlighted in bold. |full| = 4,434. are lowercased and tokenized and the tokens in code snippets are separated with space. Statistics of the datasets are summarized in Table 1. 4.2 Experimental Setups Joint-learning Setup. Before jointly learning the models, we pre-train the parser and the generator separately, using the labeled dataset, to enable the sampling of valid candidates with beam search when optimizing the lower bounds of dual information (Eqs. 7 and 8). The beam size is tuned from {3,5}. The parser and the generator are pretrained until convergence. We also learn the language models for NL and MRs on the training sets beforehand, which are not updated during joint learning. Joint learning stops when the parser or the generator does not get improved for 5 continuous iterations. λ is set to 0.1 for all the experiments. Additional descriptions about our setup are provided in Appendix C. For the semi-supervised setup, since ATIS and DJANGO do not have additional unlabeled corpus and it is hard to obtain in-domain NL utterances and MRs, we create a new partial training set from the original training set via subsampling, and the rest is used as the unlabeled corpus. 
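Concretely, a semi-supervised training step over this mixture of labeled and unlabeled data (Eq. 13) can be organized as in the sketch below. It reuses the `dim_parser_loss` sketch given earlier and assumes a mirror-image `dim_generator_loss` for the generator-as-primal direction; both are passed in as callables, and all model objects are placeholders rather than the authors' implementation.

```python
def semi_supervised_step(parser, generator, nl_lm, mr_lm,
                         labeled, unlabeled_nl, unlabeled_mr,
                         dim_parser_loss, dim_generator_loss, lam=0.1):
    """One optimization step for Eq. 13 over labeled and unlabeled batches."""
    loss = 0.0
    for x, y in labeled:
        # Supervised terms plus both lower bounds (labeled part of Eq. 13).
        loss += -parser.log_prob(y, given=x) - generator.log_prob(x, given=y)
        loss += lam * (dim_parser_loss(parser, generator, mr_lm, x)
                       + dim_generator_loss(parser, generator, nl_lm, y))
    for x in unlabeled_nl:   # unlabeled NL: Ineq. 11 shares the bound of Ineq. 6
        loss += lam * dim_parser_loss(parser, generator, mr_lm, x)
    for y in unlabeled_mr:   # unlabeled MRs: Ineq. 12 shares the bound of Eq. 9
        loss += lam * dim_generator_loss(parser, generator, nl_lm, y)
    return loss
```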
For CONALA, we subsample data from the full training set to construct the new training set and unlabeled set instead of sampling from the low-quality corpus which will much boost the data volume. Evaluation Metrics. Accuracy (Acc.) is reported 2096 CODE GENERATION (in Acc.) Pro. SUPER DIM SEMIDIM BACKBOOST 1/8 42.3 44.9 47.2 47.0 1/4 50.2 51.1 54.5 51.7 3/8 52.2 53.7 54.6 55.3 1/2 56.3 58.4 59.2 58.9 full 65.1 66.6 – – Previous Supervised Methods (Pro. = full) Acc. LPN (Ling et al., 2016) 62.3 SNM (Yin and Neubig, 2017) 71.6 COARSE2FINE (Dong and Lapata, 2018) 74.1 CODE SUMMARIZATION (in BLEU) Pro. SUPER DIM SEMIDIM SELFTRAIN 1/8 54.1 56.0 58.5 54.4 1/4 57.1 61.4 62.7 58.0 3/8 63.0 64.3 64.6 63.0 1/2 65.2 66.3 66.7 65.4 full 68.1 70.8 – – Previous Supervised Methods (Pro. = full) BLEU DEEPCOM (Hu et al., 2018) 65.9 Table 3: Code generation and code summarization results on DJANGO. |full| = 16,000. for parser evaluation based on exact match, and BLEU-4 is adopted for generator evaluation. For the code generation task in CONALA, we use BLEU-4 following the setup in Yin et al. (2018a). Baselines. We compare our methods of DIM and SEMIDIM with the following baselines: SUPER: Train the parser or generator separately without joint learning. The models for the parser and generator are the same as DIM. SELFTRAIN (Lee, 2013): We use the pre-trained parser or generator to generate pseudo labels for the unlabeled sources, then the constructed pseudo samples will be mixed with the labeled data to fine-tune the pre-trained parser or generator. BACKBOOST: Adopted from the back translation method in Sennrich et al. (2016), which generates sources from unlabeled targets. The training process for BACKBOOST is the same as in SELFTRAIN. In addition to the above baselines, we also compare with popular supervised methods for each task, shown in the corresponding result tables. 4.3 Results and Further Analysis Main Results with Full- and Semi-supervision. Results on the three datasets with supervised and semi-supervised setups are presented in Tables 2, 3, and 4. For semi-supervised experiments on ATIS, we use the NL part as extra unlabeled samCODE GENERATION (in BLEU) Pro. SUPER DIM SEMIDIM BACKBOOST 1/2 8.6 9.6 9.5 9.0 full 11.1 12.4 – – CODE SUMMARIZATION (in BLEU) Pro. SUPER DIM SEMIDIM SELFTRAIN 1/2 13.4 14.5 15.1 12.7 full 22.5 24.8 – – Previous Supervised Methods (Pro. = full) BLEU CODE GEN.: NMT (Yin et al., 2018a) 10.7 CODE SUM.: DEEPCOM (Hu et al., 2018) 20.1 After Pre-training (in BLEU) CODE GEN. CODE SUM. Pro. SUPER DIM SUPER DIM 1/2 10.3 10.6 23.1 23.0 full 11.1 12.4 25.9 26.3 Previous Supervised Methods (Pro. = full) BLEU CODE GEN.: NMT (Yin et al., 2018a) 10.9 CODE SUM.: DEEPCOM (Hu et al., 2018) 26.5 Table 4: Code generation and code summarization results on CONALA. For semi-supervised learning (Pro. = 1/2), we sample 30K code snippets from the left data (not used as training data) as unlabeled samples. |full| = 90,000. ples following Yin et al. (2018b); for DJANGO and CONALA, unlabeled code snippets are utilized. We first note the consistent advantage of DIM over SUPER across all datasets and proportions of training samples for learning. This indicates that DIM is able to exploit the interaction between the dual tasks, and further improves the performance on both semantic parsing and NL generation. For semi-supervised scenarios, SEMIDIM, which employs unlabeled samples for learning, delivers stronger performance than DIM, which only uses labeled data. 
Moreover, SEMIDIM outperforms both SELFTRAIN and BACKBOOST, the two semi-supervised learning methods. This is attributed to SEMIDIM’s strategy of re-weighing pseudo samples based on the learning signals, which are indicative of their qualities, whereas SELFTRAIN and BACKBOOST treat all pseudo samples equally during learning. Additionally, we study the pre-training effect on CONALA. As can be seen in Table 4, pre-training further improves the performance of SUPER and DIM on both code generation and summarization. Model Analysis. Here we study whether DIM helps enhance the lower bounds of the expected joint distributions of NL and MRs. Specifically, lower bounds are calculated as in Eqs. 6 and 9 on the full training set for models of SUPER and 2097 -50 -40 -30 -20 -10 0 0.05 0.1 0.15 (a) ATIS z ( = -29.3) z' ( = -19.7) -30 -25 -20 -15 -10 -5 0 0.1 0.2 z ( = -14.7) z' ( = -12.6) -60 -40 -20 0 0.05 0.1 0.15 (b) DJANGO z ( = -33.8) z' ( = -27.1) -60 -50 -40 -30 -20 -10 0 0.05 0.1 z ( = -28.6) z' ( = -24.6) -80 -60 -40 -20 0 0.05 0.1 (c) CONALA z ( = -33.6) z' ( = -27.9) -100 -80 -60 -40 -20 0 0.04 0.08 z ( = -38.0) z' ( = -32.9) Figure 4: Lower bounds of the full training set. x-axis: lower bound value; y-axis: frequency. The left column is for semantic parsing, and the right column for NL generation. z is SUPER method and z′ is DIM. µ is the average lower bound, with significantly better values boldfaced (p < 0.01). 1 2 3 4 5 6 0 0.5 1 (a) ATIS 1 2 3 4 5 6 (b) DJANGO 1 2 3 4 (c) CONALA Figure 5: Distributions of the rank of learning signals over the gold-standard samples among the sampled set on unlabeled data using SEMIDIM (Pro. = 1/2). DIM. As displayed in Fig. 4, DIM better optimizes the lower bounds of both the parser and the generator, with significantly higher values of average lower bounds on the full data. These results further explains that when the lower bound of the primal model is improved, it produces learning signals of high quality for the dual model, leading to better performance on both tasks. As conjectured above, SEMIDIM outperforms SELFTRAIN in almost all setups because SEMIDIM re-weights the pseudo data with learning signals from the dual model. To demonstrate this, by giving the gold label for the unlabeled corpus, we rank the learning signal over the gold label among the sampled set using the semi-trained model, e.g. on ATIS, given an NL x from the dataset used as the unlabeled corpus, we consider the position of the learning signal l(x, y∗; φ) of gold-standard sample among all samples  l(x, ˆyi; φ)|ˆyi ∈S . As seen in Fig. 5, the gold candidates are almost always top-ranked, indicating that SEMIDIM is effective of separating pseudo samples of high and low-quality. PARSER SUPER š ›   ¡ ♂ GEN. DIM ATIS 84.6 84.2 85.3 DJANGO 65.1 65.8 66.6 CONALA 11.1 11.4 12.4 GENERATOR SUPER š ›   ¡ ♂ PARSER DIM ATIS 39.3 41.0 40.6 DJANGO 68.1 66.5 70.8 CONALA 22.5 23.0 24.8 Table 5: Ablation study with full training set by freezing ( š ›   ¡ ♂ ) model parameters for generator or parser during learning. Darker indicates higher values. 0 0.005 0.01 0.05 0.1 0.5 1 57 59 61 Acc. (a) CODE GENERATION DIM SEMI-DIM SUPER BACKBOOST 0 0.005 0.01 0.05 0.1 0.5 1 65 67 69 BLEU-4 (b) CODE SUMMARIZATION DIM SEMI-DIM SUPER SELFTRAIN Figure 6: Model performance with different λ values on DJANGO (Pro. = 1/2). Ablation Study. We conduct ablation studies by training DIM with the parameters of parser or generator frozen. The results are presented in Table 5. 
As anticipated, for both of parsing and generation, when the dual model is frozen, the performance of the primal model degrades. This again demonstrates DIM’s effectiveness of jointly optimizing both tasks. Intuitively, jointly updating both the primal and dual models allows a better learned dual model to provide high-quality learning signals, leading to an improved lower bound for the primal. As a result, freezing parameters of the dual model has a negative impact on the learning signal quality, which affects primal model learning. Effect of λ. λ controls the tradeoff between learning dual information and the unsupervised learning objective. Fig. 6 shows that the optimal model performance can be obtained when λ is within 0.1 ∼1. When λ is set to 0, the joint training only employs labeled samples, and its performance decreases significantly. A minor drop is observed at λ = 0.01, which is considered to result from the variance of learning signals derived from the REINFORCE algorithm. Correlation between Parser and Generator. 2098 57 57.5 58 58.5 acc. 64 65 66 67 BLEU-4 (a) DIM 57.5 58 58.5 59 59.5 acc. 65 66 67 68 BLEU-4 (b) SEMI-DIM Coef. = 0.922 Coef. = 0.807 Figure 7: Performance correlation between parser and generator. x-axis is for parser and y-axis is for generator. Coef. indicates Pearson correlation coefficient. We further study the performance correlation between the coupled parser and generator. Using the model outputs shown in Fig. 6, we run linear regressions of generator performance on parser performance, and a high correlation is observed between them (Fig. 7). 5 Related Work Semantic Parsing and NL Generation. Neural sequence-to-sequence models have achieved promising results on semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; Ling et al., 2016; Dong and Lapata, 2018) and natural language generation (Iyer et al., 2016; Konstas et al., 2017; Hu et al., 2018). To better model structured MRs, tree structures and more complicated graphs are explored for both parsing and generation (Dong and Lapata, 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; Song et al., 2018; Cheng et al., 2017; Alon et al., 2018). Semisupervised learning has been widely studied for semantic parsing (Yin et al., 2018b; Kocisk´y et al., 2016; Jia and Liang, 2016). Similar to our work, Chen and Zhou (2018) and Allamanis et al. (2015) study code retrieval and code summarization jointly to enhance both tasks. Here, we focus on the more challenging task of code generation instead of retrieval, and we also aim for generalpurpose MRs. Joint Learning in NLP. There has been growing interests in leveraging related NLP problems to enhance primal tasks (Collobert et al., 2011; Peng et al., 2017; Liu et al., 2016), e.g. sequence tagging (Collobert et al., 2011), dependency parsing (Peng et al., 2017), discourse analysis (Liu et al., 2016). Among those, multi-task learning (MTL) (Ando and Zhang, 2005) is a common method for joint learning, especially for neural networks where parameter sharing is utilized for representation learning. We follow the recent work on dual learning (Xia et al., 2017) to train dual tasks, where interactions can be employed to enhance both models. Dual learning has been successfully applied in NLP and computer vision problems, such as neural machine translation (He et al., 2016), question generation and answering (Tang et al., 2017), image-to-image translation (Yi et al., 2017; Zhu et al., 2017). Different from Xia et al. 
(2017) which minimizes the divergence between the two expected joint distributions, we aim to learn the expected distributions in a way similar to distribution matching (Gan et al., 2017). Furthermore, our method can be extended to semi-supervised scenario, prior to Xia et al. (2017)’s work which can only be applied in supervised setup. Following Zhang et al. (2018), we deduce the variational lower bounds of expected distributions via information maximization (Barber and Agakov, 2003). DIM aims to optimize the dual information instead of the two mutual information studied in Zhang et al. (2018). 6 Conclusion In this work, we propose to jointly train the semantic parser and NL generator by exploiting the structural connections between them. We introduce the method of DIM to exploit the duality, and provide a principled way to optimize the dual information. We further extend supervised DIM to semi-supervised scenario (SEMIDIM). Extensive experiments demonstrate the effectiveness of our proposed methods. To overcome the issue of poor labeled corpus for semantic parsing, some automatically mined datasets have been proposed, e.g. CONALA (Yin et al., 2018a) and STAQC (Yao et al., 2018). However, these datasets are noisy and it is hard to train robust models out of them. In the future, we will further apply DIM to learn semantic parser and NL generator from the noisy datasets. Acknowledgments The work described in this paper is supported by Research Grants Council of Hong Kong (PolyU 152036/17E) and National Natural Science Foundation of China (61672445). Lu Wang is supported by National Science Foundation through Grants IIS-1566382 and IIS-1813341. This work was done when Hai was a research assistant in PolyU from Oct. 2018 to March 2019. 2099 References Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of ICML. Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of ICML. Uri Alon, Omer Levy, and Eran Yahav. 2018. code2seq: Generating sequences from structured representations of code. CoRR, abs/1808.01400. Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. David Barber and Felix V. Agakov. 2003. The IM algorithm: A variational approach to information maximization. In Proceedings of NIPS. Qingying Chen and Minghui Zhou. 2018. A neural framework for retrieval and summarization of source code. In Proceedings of ASE. Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of ACL. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. 
In Proceedings of ACL. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime G. Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of NAACL HLT. Zhe Gan, Liqun Chen, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, and Lawrence Carin. 2017. Triangle generative adversarial networks. In Proceedings of NIPS. Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of ACL. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Proceedings of NIPS. Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In Proceedings of ICPC. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of ACL. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL. Tom´as Kocisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of EMNLP. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: sequence-to-sequence models for parsing and generation. In Proceedings of ACL. Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, Fumin Wang, and Andrew W. Senior. 2016. Latent predictor networks for code generation. In Proceedings of ACL. Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In Proceedings of AAAI. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of INTERSPEECH. Dipendra Kumar Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proceedings of EMNLP. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (T). In Proceedings of ASE. 2100 Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304. Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proceedings of ACL. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of ACL. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. In Proceedings of ACL. Duyu Tang, Nan Duan, Tao Qin, and Ming Zhou. 2017. Question answering and question generation as dual tasks. CoRR, abs/1706.02027. Lappoon R. Tang and Raymond J. Mooney. 2000. 
Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In Joint SIGDAT Conference on EMNLP and Very Large Corpora. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning. Yuk Wah Wong and Raymond J. Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Proceedings of NAACL-HLT. Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of ICML. Ziyu Yao, Daniel S. Weld, Wei-Peng Chen, and Huan Sun. 2018. Staqc: A systematically mined questioncode dataset from stack overflow. In Proceedings of WWW. Zili Yi, Hao (Richard) Zhang, Ping Tan, and Minglun Gong. 2017. Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of ICCV. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018a. Learning to mine aligned code and natural language pairs from stack overflow. In Proceedings of MSR. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of ACL. Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018b. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In Proceedings of ACL. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of IAAI. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of UAI. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of EMNLP-CoNLL. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of NIPS. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of ICCV. 2101 A Model Details for the Parser and Generator The parser and generator have the same seq2seq framework. We take the parser for example. Given the NL utterance x and the linearized MR y, we use bi-LSTM to encode x into context vectors, and then a LSTM decoder generates y from the context vectors. The parser pθ(y|x) is formulated as following: pθ(y|x) = |y| Y t=1 pθ(yt|y<t, x) (14) where y<t = y1 · · · yt−1. The hidden state vector at time t from the encoder is the concatenation of forward hidden vector −→ h t and backward one ←− h t, denoted as ht = [−→ h t, ←− h t]. With the LSTM unit fLSTMe from the encoder, we have −→ h t = fLSTMe(xt, −→ h t−1) and ←− h t = fLSTMe(xt, ←− h t+1). From the decoder side, using the decoder LSTM unit fLSTMd, we have the hidden state vector at time t as st = fLSTMd(yt−1, st−1). 
Global attention mechanism (Luong et al., 2015) is applied to obtain the context vector ct at time t: ct = |x| X i=1 αt,ihi where αt,i is the attention weight and is specified as: αt,i = exp(Watt[st; hi]) P|x| k=1 exp(Watt[sthk]) (15) where Watt is the learnable parameters. At time t, with hidden state st in the decoder and context vector ct from the encoder, we have the prediction probability for yt: pvocab(yt|y<t, x) = fsoftmax(Wd1 · tanh(Wd2[st; ct])) where Wd1 and Wd2 are learnable parameters. We further apply the pointer-network (Vinyals et al., 2015) to copy tokens from the input to alleviate the out-of-vocabulary (OOV) issue. We adopt the calculation flows for copying mechanism from Yin et al. (2018b), readers can refer to that paper for further details. B Marginal Distributions To estimate the marginal distributions p(x) and q(y), we learn the LSTM language models over the NL utterances and MRs. MRs are linearized. Suppose given the NL x = {xi}|x| i=1, the learning objective is: p(x) = |x| Y i=1 p(xi|x<i) (16) where x<i = x1 · · · xi−1. At time t, we have the following probability to predict xt: p(xt|x<t) = fsoftmax(W · ht + b) (17) Here, ht is estimated using the LSTM network: ht = fLSTM(xt, ht−1) (18) The above marginal distribution estimation for NLs is also applied to linearized MRs. C Experimental Setups C.1 Marginal Distribution We pre-train the language models on the full training set before joint learning and the language mdoels will be fixed in the following experiments. The embedding size is selected from {128, 256} and the hidden size is tuned from {256, 512}, which are both evaluated on the validation set. We use SGD to update the models. Early stopping is applied and the training will be stopped if the ppl value does not decrease for continuous 5 times. C.2 Model Configuration To conduct the joint learning using DIM and SEMIDIM, we have to firstly train the parser and generator separately referred as the method of SUPER. To pre-train the parser and generator, we tune the embedding size from {125, 150, 256} and hidden size from {256, 300, 512}. The batch size is selected from {10, 16} varying over the datasets. Early stopping is applied and the patience time is set to 5. Initial learning rate is 0.001. Adam is adopted to optimize the models. The parser and generator will be trained until convergence. After the pre-training, we conduct joint learning based on the pre-trained parser and generator. The learning rate will be slowed down to 0.00025. The beam size for sampling is tuned from {3, 5}.
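For concreteness, the decoder step described in Appendix A (the attention-weighted context vector of Eq. (15), followed by the vocabulary prediction) can be sketched in a few lines of PyTorch. This is an illustrative re-implementation under our own naming and dimension assumptions, not the authors' code, and the pointer-network copying step adopted from Yin et al. (2018b) is omitted.

```python
import torch
import torch.nn as nn

class AttentiveDecoderStep(nn.Module):
    """One decoder step: attention over encoder states, then vocabulary prediction.

    Shapes (assumed): enc_states is (src_len, 2*hid) from the bi-LSTM encoder,
    dec_state is (hid,) from the decoder LSTM.
    """
    def __init__(self, hid, vocab_size):
        super().__init__()
        self.w_att = nn.Linear(3 * hid, 1, bias=False)  # scores W_att[s_t; h_i], Eq. (15)
        self.w_d2 = nn.Linear(3 * hid, hid)             # combines [s_t; c_t]
        self.w_d1 = nn.Linear(hid, vocab_size)          # projects to the vocabulary

    def forward(self, dec_state, enc_states):
        src_len = enc_states.size(0)
        expanded = dec_state.unsqueeze(0).expand(src_len, -1)
        scores = self.w_att(torch.cat([expanded, enc_states], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)                      # attention weights
        context = (alpha.unsqueeze(-1) * enc_states).sum(dim=0)   # c_t = sum_i alpha_{t,i} h_i
        hidden = torch.tanh(self.w_d2(torch.cat([dec_state, context], dim=-1)))
        return torch.softmax(self.w_d1(hidden), dim=-1), alpha    # p_vocab(y_t | y_<t, x)
```

A complete parser would additionally mix the copy distribution into p_vocab to handle out-of-vocabulary tokens, as described above.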
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2102–2113 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2102 Learning to Select, Track, and Generate for Data-to-Text Hayate Iso ∗† Yui Uehara‡ Tatsuya Ishigaki♮‡ Hiroshi Noji‡ Eiji Aramaki†‡ Ichiro Kobayashi♭‡ Yusuke Miyao♯‡ Naoaki Okazaki♮‡ Hiroya Takamura♮‡ †Nara Institute of Science and Technology ‡Artificial Intelligence Research Center, AIST ♮Tokyo Institute of Technology ♭Ochanomizu University ♯The University of Tokyo {iso.hayate.id3,aramaki}@is.naist.jp [email protected] {yui.uehara,ishigaki.t,hiroshi.noji,takamura.hiroya}@aist.go.jp [email protected] [email protected] Abstract We propose a data-to-text generation model with two modules, one for tracking and the other for text generation. Our tracking module selects and keeps track of salient information and memorizes which record has been mentioned. Our generation module generates a summary conditioned on the state of tracking module. Our model is considered to simulate the human-like writing process that gradually selects the information by determining the intermediate variables while writing the summary. In addition, we also explore the effectiveness of the writer information for generation. Experimental results show that our model outperforms existing models in all evaluation metrics even without writer information. Incorporating writer information further improves the performance, contributing to content planning and surface realization. 1 Introduction Advances in sensor and data storage technologies have rapidly increased the amount of data produced in various fields such as weather, finance, and sports. In order to address the information overload caused by the massive data, datato-text generation technology, which expresses the contents of data in natural language, becomes more important (Barzilay and Lapata, 2005). Recently, neural methods can generate high-quality short summaries especially from small pieces of data (Liu et al., 2018). Despite this success, it remains challenging to generate a high-quality long summary from data (Wiseman et al., 2017). One reason for the difficulty is because the input data is too large for a naive model to find its salient part, i.e., to determine which part of the data should be mentioned. ∗Work was done during the internship at Artificial Intelligence Research Center, AIST In addition, the salient part moves as the summary explains the data. For example, when generating a summary of a basketball game (Table 1 (b)) from the box score (Table 1 (a)), the input contains numerous data records about the game: e.g., Jordan Clarkson scored 18 points. Existing models often refer to the same data record multiple times (Puduppully et al., 2019). The models may mention an incorrect data record, e.g., Kawhi Leonard added 19 points: the summary should mention LaMarcus Aldridge, who scored 19 points. Thus, we need a model that finds salient parts, tracks transitions of salient parts, and expresses information faithful to the input. In this paper, we propose a novel data-totext generation model with two modules, one for saliency tracking and another for text generation. The tracking module keeps track of saliency in the input data: when the module detects a saliency transition, the tracking module selects a new data record1 and updates the state of the tracking module. The text generation module generates a document conditioned on the current tracking state. 
Our model is considered to imitate the human-like writing process that gradually selects and tracks the data while generating the summary. In addition, we note some writer-specific patterns and characteristics: how data records are selected to be mentioned; and how data records are expressed as text, e.g., the order of data records and the word usages. We also incorporate writer information into our model. The experimental results demonstrate that, even without writer information, our model achieves the best performance among the previous models in all evaluation metrics: 94.38% precision of relation generation, 42.40% F1 score of content selection, 19.38% normalized Damerau-Levenshtein 1We use ‘data record’ and ‘relation’ interchangeably. 2103 Distance (DLD) of content ordering, and 16.15% of BLEU score. We also confirm that adding writer information further improves the performance. 2 Related Work 2.1 Data-to-Text Generation Data-to-text generation is a task for generating descriptions from structured or non-structured data including sports commentary (Tanaka-Ishii et al., 1998; Chen and Mooney, 2008; Taniguchi et al., 2019), weather forecast (Liang et al., 2009; Mei et al., 2016), biographical text from infobox in Wikipedia (Lebret et al., 2016; Sha et al., 2018; Liu et al., 2018) and market comments from stock prices (Murakami et al., 2017; Aoki et al., 2018). Neural generation methods have become the mainstream approach for data-to-text generation. The encoder-decoder framework (Cho et al., 2014; Sutskever et al., 2014) with the attention (Bahdanau et al., 2015; Luong et al., 2015) and copy mechanism (Gu et al., 2016; Gulcehre et al., 2016) has successfully applied to data-to-text tasks. However, neural generation methods sometimes yield fluent but inadequate descriptions (Tu et al., 2017). In data-to-text generation, descriptions inconsistent to the input data are problematic. Recently, Wiseman et al. (2017) introduced the ROTOWIRE dataset, which contains multisentence summaries of basketball games with boxscore (Table 1). This dataset requires the selection of a salient subset of data records for generating descriptions. They also proposed automatic evaluation metrics for measuring the informativeness of generated summaries. Puduppully et al. (2019) proposed a two-stage method that first predicts the sequence of data records to be mentioned and then generates a summary conditioned on the predicted sequences. Their idea is similar to ours in that the both consider a sequence of data records as content planning. However, our proposal differs from theirs in that ours uses a recurrent neural network for saliency tracking, and that our decoder dynamically chooses a data record to be mentioned without fixing a sequence of data records. 2.2 Memory modules The memory network can be used to maintain and update representations of the salient information (Weston et al., 2015; Sukhbaatar et al., 2015; Graves et al., 2016). This module is often used in natural language understanding to keep track of the entity state (Kobayashi et al., 2016; Hoang et al., 2018; Bosselut et al., 2018). Recently, entity tracking has been popular for generating coherent text (Kiddon et al., 2016; Ji et al., 2017; Yang et al., 2017; Clark et al., 2018). Kiddon et al. (2016) proposed a neural checklist model that updates predefined item states. Ji et al. (2017) proposed an entity representation for the language model. 
Updating entity tracking states when the entity is introduced, their method selects the salient entity state. Our model extends this entity tracking module for data-to-text generation tasks. The entity tracking module selects the salient entity and appropriate attribute in each timestep, updates their states, and generates coherent summaries from the selected data record. 3 Data Through careful examination, we found that in the original dataset ROTOWIRE, some NBA games have two documents, one of which is sometimes in the training data and the other is in the test or validation data. Such documents are similar to each other, though not identical. To make this dataset more reliable as an experimental dataset, we created a new version. We ran the script provided by Wiseman et al. (2017), which is for crawling the ROTOWIRE website for NBA game summaries. The script collected approximately 78% of the documents in the original dataset; the remaining documents disappeared. We also collected the box-scores associated with the collected documents. We observed that some of the box-scores were modified compared with the original ROTOWIRE dataset. The collected dataset contains 3,752 instances (i.e., pairs of a document and box-scores). However, the four shortest documents were not summaries; they were, for example, an announcement about the postponement of a match. We thus deleted these 4 instances and were left with 3,748 instances. We followed the dataset split by Wiseman et al. (2017) to split our dataset into training, development, and test data. We found 14 instances that didn’t have corresponding instances in the original data. We randomly classified 9, 2, and 3 of those 14 instances respectively into training, development, and test data. Finally, the sizes of 2104 TEAM H/V WIN LOSS PTS REB AST FG PCT FG3 PCT . . . KNICKS H 16 19 104 46 26 45 46 . . . BUCKS V 18 16 105 42 20 47 32 . . . PLAYER H/V PTS REB AST BLK STL MIN CITY . . . CARMELO ANTHONY H 30 11 7 0 2 37 NEW YORK . . . DERRICK ROSE H 15 3 4 0 1 33 NEW YORK . . . COURTNEY LEE H 11 2 3 1 1 38 NEW YORK . . . GIANNIS ANTETOKOUNMPO V 27 13 4 3 1 39 MILWAUKEE . . . GREG MONROE V 18 9 4 1 3 31 MILWAUKEE . . . JABARI PARKER V 15 4 3 0 1 37 MILWAUKEE . . . MALCOLM BROGDON V 12 6 8 0 0 38 MILWAUKEE . . . MIRZA TELETOVIC V 13 1 0 0 0 21 MILWAUKEE . . . JOHN HENSON V 2 2 0 0 0 14 MILWAUKEE . . . . . . . . . . . . . . . . . . . . . . . . (a) Box score: Top contingency table shows number of wins and losses and summary of each game. Bottom table shows statistics of each player such as points scored (PLAYER’s PTS), and total rebounds (PLAYER’s REB). The Milwaukee Bucks defeated the New York Knicks, 105-104, at Madison Square Garden on Wednesday. The Knicks (16-19) checked in to Wednesday’s contest looking to snap a five-game losing streak and heading into the fourth quarter, they looked like they were well on their way to that goal. . . . Antetokounmpo led the Bucks with 27 points, 13 rebounds, four assists, a steal and three blocks, his second consecutive double-double. Greg Monroe actually checked in as the second-leading scorer and did so in his customary bench role, posting 18 points, along with nine boards, four assists, three steals and a block. Jabari Parker contributed 15 points, four rebounds, three assists and a steal. Malcolm Brogdon went for 12 points, eight assists and six rebounds. Mirza Teletovic was productive in a reserve role as well, generating 13 points and a rebound. . . . 
Courtney Lee checked in with 11 points, three assists, two rebounds, a steal and a block. . . . The Bucks and Knicks face off once again in the second game of the home-and-home series, with the meeting taking place Friday night in Milwaukee. (b) NBA basketball game summary: Each summary consists of game victory or defeat of the game and highlights of valuable players. Table 1: Example of input and output data: task defines box score (1a) used for input and summary document of game (1b) used as output. Extracted entities are shown in bold face. Extracted values are shown in green. t 199 200 201 202 203 204 205 206 207 208 209 Yt Jabari Parker contributed 15 points , four rebounds , three assists Zt 1 1 0 1 0 0 1 0 0 1 0 Et JABARI JABARI JABARI JABARI JABARI PARKER PARKER PARKER PARKER PARKER At FIRST NAME LAST NAME PLAYER PTS PLAYER REB PLAYER AST Nt 0 1 1 Table 2: Running example of our model’s generation process. At every time step t, model predicts each random variable. Model firstly determines whether to refer to data records (Zt = 1) or not (Zt = 0). If random variable Zt = 1, model selects entity Et, its attribute At and binary variables Nt if needed. For example, at t = 202, model predicts random variable Z202 = 1 and then selects the entity JABARI PARKER and its attribute PLAYER PTS. Given these values, model outputs token 15 from selected data record. our training, development, test dataset are respectively 2,714, 534, and 500. On average, each summary has 384 tokens and 644 data records. Each match has only one summary in our dataset, as far as we checked. We also collected the writer of each document. Our dataset contains 32 different writers. The most prolific writer in our dataset wrote 607 documents. There are also writers who wrote less than ten documents. On average, each writer wrote 117 documents. We call our new dataset ROTOWIRE-MODIFIED.2 4 Saliency-Aware Text Generation At the core of our model is a neural language model with a memory state hLM to generate a summary y1:T = (y1, . . . , yT ) given a set of data records x. Our model has another memory state hENT, which is used to remember the data records 2For information about the dataset, please follow this link: https://github.com/aistairc/ rotowire-modified that have been referred to. hENT is also used to update hLM, meaning that the referred data records affect the text generation. Our model decides whether to refer to x, which data record r ∈x to be mentioned, and how to express a number. The selected data record is used to update hENT. Formally, we use the four variables: 1. Zt: binary variable that determines whether the model refers to input x at time step t (Zt = 1). 2. Et: At each time step t, this variable indicates the salient entity (e.g., HAWKS, LEBRON JAMES). 3. At: At each time step t, this variable indicates the salient attribute to be mentioned (e.g., PTS). 4. Nt: If attribute At of the salient entity Et is a numeric attribute, this variable determines if a value in the data records should be output in Arabic numerals (e.g., 50) or in English words (e.g., five). To keep track of the salient entity, our model predicts these random variables at each time step 2105 t through its summary generation process. Running example of our model is shown in Table 2 and full algorithm is described in Appendix A. In the following subsections, we explain how to initialize the model, predict these random variables, and generate a summary. Due to space limitations, bias vectors are omitted. 
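To make the per-step decisions above concrete (cf. the running example in Table 2), the following Python sketch spells out the record and decision structures the model reasons over. The field names are illustrative choices on our part, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Record:
    """A data record r = (entity, attribute, value), e.g. ("JABARI PARKER", "PLAYER_PTS", "15")."""
    entity: str
    attribute: str
    value: str

@dataclass
class StepDecision:
    """Latent decisions predicted at one generation step t."""
    z: int                                 # 1 if this step refers to a data record, else 0
    entity: Optional[str] = None           # E_t, set only when z == 1
    attribute: Optional[str] = None        # A_t, set only when z == 1
    arabic_numeral: Optional[bool] = None  # N_t, set only for numeric attributes

# The step that emits "15" in Table 2 (t = 202), under these illustrative names:
step_202 = StepDecision(z=1, entity="JABARI PARKER", attribute="PLAYER_PTS", arabic_numeral=True)
```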
Before explaining our method, we describe our notation. Let E and A denote the sets of entities and attributes, respectively. Each record r ∈x consists of entity e ∈E, attribute a ∈A, and its value x[e, a], and is therefore represented as r = (e, a, x[e, a]). For example, the boxscore in Table 1 has a record r such that e = ANTHONY DAVIS, a = PTS, and x[e, a] = 20. 4.1 Initialization Let r denote the embedding of data record r ∈x. Let ¯e denote the embedding of entity e. Note that ¯e depends on the set of data records, i.e., it depends on the game. We also use e for static embedding of entity e, which, on the other hand, does not depend on the game. Given the embedding of entity e, attribute a, and its value v, we use the concatenation layer to combine the information from these vectors to produce the embedding of each data record (e, a, v), denoted as re,a,v as follows: re,a,v = tanh W R(e ⊕a ⊕v)  , (1) where ⊕indicates the concatenation of vectors, and W R denotes a weight matrix.3 We obtain ¯e in the set of data records x, by summing all the data-record embeddings transformed by a matrix: ¯e = tanh X a∈A W A a re,a,x[e,a] ! , (2) where W A a is a weight matrix for attribute a. Since ¯e depends on the game as above, ¯e is supposed to represent how entity e played in the game. To initialize the hidden state of each module, we use embeddings of <SOD> for hLM and averaged embeddings of ¯e for hENT. 4.2 Saliency transition Generally, the saliency of text changes during text generation. In our work, we suppose that the 3We also concatenate the embedding vectors that represents whether the entity is in home or away team. saliency is represented as the entity and its attribute being talked about. We therefore propose a model that refers to a data record at each timepoint, and transitions to another as text goes. To determine whether to transition to another data record or not at time t, the model calculates the following probability: p(Zt = 1 | hLM t−1, hENT t−1) = σ(W z(hLM t−1 ⊕hENT t−1)), (3) where σ(·) is the sigmoid function. If p(Zt = 1 | hLM t−1, hENT t−1) is high, the model transitions to another data record. When the model decides to transition to another, the model then determines which entity and attribute to refer to, and generates the next word (Section 4.3). On the other hand, if the model decides not transition to another, the model generates the next word without updating the tracking states hENT t = hENT t−1 (Section 4.4). 4.3 Selection and tracking When the model refers to a new data record (Zt = 1), it selects an entity and its attribute. It also tracks the saliency by putting the information about the selected entity and attribute into the memory vector hENT. The model begins to select the subject entity and update the memory states if the subject entity will change. Specifically, the model first calculates the probability of selecting an entity: p(Et = e | hLM t−1, hENT t−1) ∝ ( exp hENT s W OLDhLM t−1  if e ∈Et−1 exp ¯eW NEWhLM t−1  otherwise , (4) where Et−1 is the set of entities that have already been referred to by time step t, and s is defined as s = max{s : s ≤t −1, e = es}, which indicates the time step when this entity was last mentioned. The model selects the most probable entity as the next salient entity and updates the set of entities that appeared (Et = Et−1 ∪{et}). 
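A minimal PyTorch sketch of the record embedding of Eq. (1), the game-dependent entity embedding of Eq. (2), and the transition probability of Eq. (3) is given below. Module names and dimensions are assumptions, bias terms are omitted as in the paper, and entity selection (Eq. (4)) would score candidate entities against the concatenated states in the same spirit.

```python
import torch
import torch.nn as nn

class RecordEmbedder(nn.Module):
    def __init__(self, dim, attributes):
        super().__init__()
        self.w_r = nn.Linear(3 * dim, dim, bias=False)                 # W^R in Eq. (1)
        self.w_a = nn.ModuleDict({a: nn.Linear(dim, dim, bias=False)   # W^A_a in Eq. (2)
                                  for a in attributes})

    def record(self, e, a, v):
        # r_{e,a,v} = tanh(W^R (e ⊕ a ⊕ v))
        return torch.tanh(self.w_r(torch.cat([e, a, v], dim=-1)))

    def entity(self, records_by_attr):
        # ē = tanh( Σ_a W^A_a r_{e,a,x[e,a]} ), so ē depends on the game
        return torch.tanh(sum(self.w_a[a](r) for a, r in records_by_attr.items()))

def transition_prob(w_z, h_lm, h_ent):
    # p(Z_t = 1 | h^LM_{t-1}, h^ENT_{t-1}) = sigmoid(W^z [h^LM; h^ENT]), Eq. (3)
    return torch.sigmoid(w_z(torch.cat([h_lm, h_ent], dim=-1)))
```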
If the salient entity changes (et ̸= et−1), the model updates the hidden state of the tracking model hENT with a recurrent neural network with a gated recurrent unit (GRU; Chung et al., 2014): hENT′ t =      hENT t−1 if et = et−1 GRUE(¯e, hENT t−1) else if et ̸∈Et−1 GRUE(W S shENT s , hENT t−1) otherwise. (5) 2106 Note that if the selected entity at time step t, et, is identical to the previously selected entity et−1, the hidden state of the tracking model is not updated. If the selected entity et is new (et ̸∈Et−1), the hidden state of the tracking model is updated with the embedding ¯e of entity et as input. In contrast, if entity et has already appeared in the past (et ∈ Et−1) but is not identical to the previous one (et ̸= et−1), we use hENT s (i.e., the memory state when this entity last appeared) to fully exploit the local history of this entity. Given the updated hidden state of the tracking model hENT t , we next select the attribute of the salient entity by the following probability: p(At = a | et, hLM t−1, hENT′ t ) (6) ∝exp  ret,a,x[et,a]W ATTR(hLM t−1 ⊕hENT′ t )  . After selecting at, i.e., the most probable attribute of the salient entity, the tracking model updates the memory state hENT t with the embedding of the data record ret,at,x[et,at] introduced in Section 4.1: hENT t = GRUA(ret,at,x[et,at], hENT′ t ). (7) 4.4 Summary generation Given two hidden states, one for language model hLM t−1 and the other for tracking model hENT t , the model generates the next word yt. We also incorporate a copy mechanism that copies the value of the salient data record x[et, at]. If the model refers to a new data record (Zt = 1), it directly copies the value of the data record x[et, at]. However, the values of numerical attributes can be expressed in at least two different manners: Arabic numerals (e.g., 14) and English words (e.g., fourteen). We decide which one to use by the following probability: p(Nt = 1 | hLM t−1, hENT t ) = σ(W N(hLM t−1 ⊕hENT t )), (8) where W N is a weight matrix. The model then updates the hidden states of the language model: h′ t = tanh W H(hLM t−1 ⊕hENT t )  , (9) where W H is a weight matrix. If the salient data record is the same as the previous one (Zt = 0), it predicts the next word yt via a probability over words conditioned on the context vector h′ t: p(Yt | h′ t) = softmax(W Yh′ t). (10) Subsequently, the hidden state of language model hLM is updated: hLM t = LSTM(yt ⊕h′ t, hLM t−1), (11) where yt is the embedding of the word generated at time step t.4 4.5 Incorporating writer information We also incorporate the information about the writer of the summaries into our model. Specifically, instead of using Equation (9), we concatenate the embedding w of a writer to hLM t−1 ⊕hENT t to construct context vector h′ t: h′ t = tanh W ′H(hLM t−1 ⊕hENT t ⊕w)  , (12) where W ′H is a new weight matrix. Since this new context vector h′ t is used for calculating the probability over words in Equation (10), the writer information will directly affect word generation, which is regarded as surface realization in terms of traditional text generation. Simultaneously, context vector h′ t enhanced with the writer information is used to obtain hLM t , which is the hidden state of the language model and is further used to select the salient entity and attribute, as mentioned in Sections 4.2 and 4.3. Therefore, in our model, the writer information affects both surface realization and content planning. 
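Returning to the tracking update of Section 4.3, the three-way case of Eq. (5) and the attribute-conditioned update of Eq. (7) can be summarized in the following sketch. It is an unofficial illustration: the GRUCell usage, batch handling, and variable names are our assumptions.

```python
import torch.nn as nn

class SaliencyTracker(nn.Module):
    """Tracking-state updates of Eqs. (5) and (7); all tensors are (1, dim)."""
    def __init__(self, dim):
        super().__init__()
        self.gru_e = nn.GRUCell(dim, dim)            # GRU^E over entity embeddings
        self.gru_a = nn.GRUCell(dim, dim)            # GRU^A over record embeddings
        self.w_s = nn.Linear(dim, dim, bias=False)   # W^S in Eq. (5)

    def entity_update(self, h_ent, e_bar, entity, prev_entity, seen, last_state):
        if entity == prev_entity:
            return h_ent                                       # same entity: keep the state
        if entity not in seen:
            return self.gru_e(e_bar, h_ent)                    # new entity: feed its embedding ē
        return self.gru_e(self.w_s(last_state[entity]), h_ent)  # re-mentioned: reuse h^ENT_s

    def attribute_update(self, h_ent_prime, record_embedding):
        # h^ENT_t = GRU^A(r_{e_t, a_t, x[e_t, a_t]}, h^ENT'_t), Eq. (7)
        return self.gru_a(record_embedding, h_ent_prime)
```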
4.6 Learning objective We apply fully supervised training that maximizes the following log-likelihood: log p(Y1:T , Z1:T , E1:T , A1:T , N1:T | x) = T X t=1 log p(Zt = zt | hLM t−1, hENT t−1) + X t:Zt=1 log p(Et = et | hLM t−1, hENT t−1) + X t:Zt=1 log p(At = at | et, hLM t−1, hENT′ t ) + X t:Zt=1,atis num attr log p(Nt = nt | hLM t−1, hENT t ) + X t:Zt=0 log p(Yt = yt | h′ t) 4In our initial experiment, we observed a word repetition problem when the tracking model is not updated during generating each sentence. To avoid this problem, we also update the tracking model with special trainable vectors vREFRESH to refresh these states after our model generates a period: hENT t = GRUA(vREFRESH, hENT t ) 2107 Method RG CS CO BLEU # P% P% R% F1% DLD% GOLD 27.36 93.42 100. 100. 100. 100. 100. TEMPLATES 54.63 100. 31.01 58.85 40.61 17.50 8.43 Wiseman et al. (2017) 22.93 60.14 24.24 31.20 27.29 14.70 14.73 Puduppully et al. (2019) 33.06 83.17 33.06 43.59 37.60 16.97 13.96 PROPOSED 39.05 94.43 35.77 52.05 42.40 19.38 16.15 Table 3: Experimental result. Each metric evaluates whether important information (CS) is described accurately (RG) and in correct order (CO). 5 Experiments 5.1 Experimental settings We used ROTOWIRE-MODIFIED as the dataset for our experiments, which we explained in Section 3. The training, development, and test data respectively contained 2,714, 534, and 500 games. Since we take a supervised training approach, we need the annotations of the random variables (i.e., Zt, Et, At, and Nt) in the training data, as shown in Table 2. Instead of simple lexical matching with r ∈x, which is prone to errors in the annotation, we use the information extraction system provided by Wiseman et al. (2017). Although this system is trained on noisy rule-based annotations, we conjecture that it is more robust to errors because it is trained to minimize the marginalized loss function for ambiguous relations. All training details are described in Appendix B. 5.2 Models to be compared We compare our model5 against two baseline models. One is the model used by Wiseman et al. (2017), which generates a summary with an attention-based encoder-decoder model. The other baseline model is the one proposed by Puduppully et al. (2019), which first predicts the sequence of data records and then generates a summary conditioned on the predicted sequences. Wiseman et al. (2017)’s model refers to all data records every timestep, while Puduppully et al. (2019)’s model refers to a subset of all data records, which is predicted in the first stage. Unlike these models, our model uses one memory vector hENT t that tracks the history of the data records, during generation. We retrained the baselines on our new dataset. We also present the performance of the GOLD and 5Our code is available from https://github.com/ aistairc/sports-reporter TEMPLATES summaries. The GOLD summary is exactly identical with the reference summary and each TEMPLATES summary is generated in the same manner as Wiseman et al. (2017). In the latter half of our experiments, we examine the effect of adding information about writers. In addition to our model enhanced with writer information, we also add writer information to the model by Puduppully et al. (2019). Their method consists of two stages corresponding to content planning and surface realization. Therefore, by incorporating writer information to each of the two stages, we can clearly see which part of the model to which the writer information contributes to. For Puduppully et al. 
(2019) model, we attach the writer information in the following three ways: 1. concatenating writer embedding w with the input vector for LSTM in the content planning decoder (stage 1); 2. concatenating writer embedding w with the input vector for LSTM in the text generator (stage 2); 3. using both 1 and 2 above. For more details about each decoding stage, readers can refer to Puduppully et al. (2019). 5.3 Evaluation metrics As evaluation metrics, we use BLEU score (Papineni et al., 2002) and the extractive metrics proposed by Wiseman et al. (2017), i.e., relation generation (RG), content selection (CS), and content ordering (CO) as evaluation metrics. The extractive metrics measure how well the relations extracted from the generated summary match the correct relations6: 6The model for extracting relation tuples was trained on tuples made from the entity (e.g., team name, city name, player name) and attribute value (e.g., “Lakers”, “92”) ex2108 - RG: the ratio of the correct relations out of all the extracted relations, where correct relations are relations found in the input data records x. The average number of extracted relations is also reported. - CS: precision and recall of the relations extracted from the generated summary against those from the reference summary. - CO: edit distance measured with normalized Damerau-Levenshtein Distance (DLD) between the sequences of relations extracted from the generated and reference summary. 6 Results and Discussions We first focus on the quality of tracking model and entity representation in Sections 6.1 to 6.4, where we use the model without writer information. We examine the effect of writer information in Section 6.5. 6.1 Saliency tracking-based model As shown in Table 3, our model outperforms all baselines across all evaluation metrics.7 One of the noticeable results is that our model achieves slightly higher RG precision than the gold summary. Owing to the extractive evaluation nature, the generated summary of the precision of the relation generation could beat the gold summary performance. In fact, the template model achieves 100% precision of the relation generations. The other is that only our model exceeds the template model regarding F1 score of the content selection and obtains the highest performance of content ordering. This imply that the tracking model encourages to select salient input records in the correct order. 6.2 Qualitative analysis of entity embedding Our model has the entity embedding ¯e, which depends on the box score for each game in addition to static entity embedding e. Now we analyze the difference of these two types of embeddings. We present a two-dimensional visualizations of both embeddings produced using PCA (Pearson, tracted from the summaries, and the corresponding attributes (e.g., “TEAM NAME”, “PTS”) found in the box- or line-score. The precision and the recall of this extraction model are respectively 93.4% and 75.0% in the test data. 7The scores of Puduppully et al. (2019)’s model significantly dropped from what they reported, especially on BLEU metric. We speculate this is mainly due to the reduced amount of our training data (Section 3). That is, their model might be more data-hungry than other models. 
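For readers unfamiliar with the extractive metrics of Section 5.3, the sketch below computes content selection (CS) precision/recall over extracted relation tuples and the normalized Damerau-Levenshtein distance underlying content ordering (CO). It is our simplified reading of the metrics of Wiseman et al. (2017), not their evaluation script; note that the CO column in Table 3 is reported so that the gold summary scores 100%, i.e., presumably as the complement of this normalized distance.

```python
def content_selection(gen_rels, ref_rels):
    """CS precision/recall of generated relation tuples against the reference relations."""
    gen, ref = set(gen_rels), set(ref_rels)
    matched = len(gen & ref)
    precision = matched / len(gen) if gen else 0.0
    recall = matched / len(ref) if ref else 0.0
    return precision, recall

def normalized_dld(a, b):
    """Damerau-Levenshtein distance (optimal string alignment), normalized by max length."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[n][m] / max(n, m, 1)
```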
Figure 1: Illustrations of static entity embeddings e (a two-dimensional PCA scatter plot whose point labels are NBA player names). Players with colored letters are listed in the ranking of top 100 players for the 2016-17 NBA season at https://www.washingtonpost.com/graphics/sports/nba-top-100-players-2016/. Only LeBron James is in red and the other players in the top 100 are in blue. Top-ranked players have similar representations of e.
1901). As shown in Figure 1, which is the visualization of the static entity embedding e, the top-ranked players are closely located. We also present the visualizations of dynamic entity embeddings ¯e in Figure 2. Although we did not carry out feature engineering specific to the NBA (e.g., whether a player scored double digits or not)8 for representing the dynamic entity embedding ¯e, the embeddings of the players who performed well in each game have similar representations. In addition, the change in embeddings of the same player was observed depending on the box-scores for each game. For instance, LeBron James recorded a double-double in a game on April 22, 2016. For this game, his embedding is located close to the embedding of Kevin Love, who also scored a double-double. However, he did not participate in the game on December 26, 2016. His embedding for this game became closer to those of other players who also did not participate.
6.3 Duplicate ratios of extracted relations
As Puduppully et al. (2019) pointed out, a generated summary may mention the same relation multiple times. Such duplicated relations are not favorable in terms of the brevity of text.
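The duplicate-mention ratio reported below (Figure 3) can be computed from the extracted relations as in the following sketch; this is our illustrative version of the counting, not the authors' analysis script.

```python
from collections import Counter

def duplicate_ratio(relations_per_summary):
    """Fraction of summaries that mention at least one extracted relation more than once."""
    def has_duplicate(relations):
        return any(c > 1 for c in Counter(relations).values())
    flagged = sum(1 for relations in relations_per_summary if has_duplicate(relations))
    return flagged / len(relations_per_summary) if relations_per_summary else 0.0
```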
Figure 3 shows the ratios of the generated summaries with duplicate mentions of relations in the development data. While the models by Wiseman et al. (2017) and Puduppully et al. (2019) respectively showed 36.0% and 15.8% as duplicate ratios, our model exhibited 4.2%. This suggests that our model dramatically suppressed the generation of redundant relations. We speculate that the tracking model successfully memorized which input records have been selected in hENT s.
8In the NBA, a player who accumulates a double-digit score in one of five categories (points, rebounds, assists, steals, and blocked shots) in a game is regarded as a good player. If a player had a double in two of those five categories, it is referred to as a double-double.
Figure 2: Illustrations of dynamic entity embedding ¯e (two scatter plots with panel titles April 22, 2016 and December 26, 2016; point labels are player names). Both left and right figures are for Cleveland Cavaliers vs. Detroit Pistons, on different dates. LeBron James is in red letters. Entities with orange symbols appeared only in the reference summary. Entities with blue symbols appeared only in the generated summary. Entities with green symbols appeared in both the reference and the generated summary. The others are with red symbols. 2 represents player who scored in the double digits, and 3 represents player who recorded double-double. Players with △did not participate in the game. ◦represents other players.
Figure 3: Ratios of generated summaries with duplicate mention of relations (bar chart over Wiseman et al. (2017), Puduppully et al. (2019), and the proposed model). Each label represents the number of duplicated relations within each document. While Wiseman et al. (2017)'s model exhibited 36.0% duplication and Puduppully et al. (2019)'s model exhibited 15.8%, our model exhibited only 4.2%.
6.4 Qualitative analysis of output examples
Table 5 shows the generated examples from validation inputs with Puduppully et al. (2019)'s model and our model. Whereas both generations seem to be fluent, the summary of Puduppully et al. (2019)'s model includes erroneous relations colored in orange. Specifically, the description of DERRICK ROSE's relations, "15 points, four assists, three rebounds and one steal in 33 minutes.", is also used for other entities (e.g., JOHN HENSON and WILLY HERNANGOMEZ). This is because Puduppully et al. (2019)'s model has no tracking module, unlike our model, which mitigates redundant references and therefore rarely contains erroneous relations. However, when complicated expressions such as parallel structures are used, our model also generates erroneous relations, as illustrated by the underlined sentences describing the two players who scored the same points.
For example, “11-point efforts” is correct for COURTNEY LEE but not for DERRICK ROSE. As a future study, it is necessary to develop a method that can handle such complicated relations. 6.5 Use of writer information We first look at the results of an extension of Puduppully et al. (2019)’s model with writer information w in Table 4. By adding w to content planning (stage 1), the method obtained improvements in CS (37.60 to 47.25), CO (16.97 to 22.16), and BLEU score (13.96 to 18.18). By adding w to the component for surface realization (stage 2), the method obtained an improvement in BLEU score (13.96 to 17.81), while the effects on the other metrics were not very significant. By adding w to both stages, the method scored the highest BLEU, while the other metrics were not very different from those obtained by adding w to stage 1. This result suggests that writer information contributes to both content planning and surface realization when it is properly used, and improvements of content planning lead to much better performance in surface realization. Our model showed improvements in most metrics and showed the best performance by incor2110 Method RG CS CO BLEU # P% P% R% F1% DLD% Puduppully et al. (2019) 33.06 83.17 33.06 43.59 37.60 16.97 13.96 + w in stage 1 28.43 84.75 45.00 49.73 47.25 22.16 18.18 + w in stage 2 35.06 80.51 31.10 45.28 36.87 16.38 17.81 + w in stage 1 & 2 28.00 82.27 44.37 48.71 46.44 22.41 18.90 PROPOSED 39.05 94.38 35.77 52.05 42.40 19.38 16.15 + w 30.25 92.00 50.75 59.03 54.58 25.75 20.84 Table 4: Effects of writer information. w indicates that WRITER embeddings are used. Numbers in bold are the largest among the variants of each method. The Milwaukee Bucks defeated the New York Knicks, 105104, at Madison Square Garden on Wednesday evening. The Bucks (18-16) have been one of the hottest teams in the league, having won five of their last six games, and they have now won six of their last eight games. The Knicks (16-19) have now won six of their last six games, as they continue to battle for the eighth and final playoff spot in the Eastern Conference. Giannis Antetokounmpo led the way for Milwaukee, as he tallied 27 points, 13 rebounds, four assists, three blocked shots and one steal, in 39 minutes . Jabari Parker added 15 points, four rebounds, three assists, one steal and one block, and 6-of-8 from long range. John Henson added two points, two rebounds, one assist, three steals and one block. John Henson was the only other player to score in double digits for the Knicks, with 15 points, four assists, three rebounds and one steal, in 33 minutes. The Bucks were led by Derrick Rose, who tallied 15 points, four assists, three rebounds and one steal in 33 minutes. Willy Hernangomez started in place of Porzingis and finished with 15 points, four assists, three rebounds and one steal in 33 minutes. Willy Hernangomez started in place of Jose Calderon ( knee ) and responded with one rebound and one block. The Knicks were led by their starting backcourt of Carmelo Anthony and Carmelo Anthony, but combined for just 13 points on 5-of-16 shooting. The Bucks next head to Philadelphia to take on the Sixers on Friday night, while the Knicks remain home to face the Los Angeles Clippers on Wednesday. (a) Puduppully et al. (2019) The Milwaukee Bucks defeated the New York Knicks, 105-104, at Madison Square Garden on Saturday. The Bucks (18-16) checked in to Saturday’s contest with a well, outscoring the Knicks (16-19) by a margin of 39-19 in the first quarter. 
However, New York by just a 25-foot lead at the end of the first quarter, the Bucks were able to pull away, as they outscored the Knicks by a 59-46 margin into the second. 45 points in the third quarter to seal the win for New York with the rest of the starters to seal the win. The Knicks were led by Giannis Antetokounmpo, who tallied a game-high 27 points, to go along with 13 rebounds, four assists, three blocks and a steal. The game was a crucial night for the Bucks’ starting five, as the duo was the most effective shooters, as they posted Milwaukee to go on a pair of low low-wise (Carmelo Anthony) and Malcolm Brogdon. Anthony added 11 rebounds, seven assists and two steals to his team-high scoring total. Jabari Parker was right behind him with 15 points, four rebounds, three assists and a block. Greg Monroe was next with a bench-leading 18 points, along with nine rebounds, four assists and three steals. Brogdon posted 12 points, eight assists, six rebounds and a steal. Derrick Rose and Courtney Lee were next with a pair of {11 / 11} -point efforts. Rose also supplied four assists and three rebounds, while Lee complemented his scoring with three assists, a rebound and a steal. John Henson and Mirza Teletovic were next with a pair of {two / two} -point efforts. Teletovic also registered 13 points, and he added a rebound and an assist. Jason Terry supplied eight points, three rebounds and a pair of steals. The Cavs remain in last place in the Eastern Conference’s Atlantic Division. They now head home to face the Toronto Raptors on Saturday night. (b) Our model Table 5: Example summaries generated with Puduppully et al. (2019)’s model (left) and our model (right). Names in bold face are salient entities. Blue numbers are correct relations derived from input data records but are not observed in reference summary. Orange numbers are incorrect relations. Green numbers are correct relations mentioned in reference summary. porating writer information w. As discussed in Section 4.5, w is supposed to affect both content planning and surface realization. Our experimental result is consistent with the discussion. 7 Conclusion In this research, we proposed a new data-to-text model that produces a summary text while tracking the salient information that imitates a humanwriting process. As a result, our model outperformed the existing models in all evaluation measures. We also explored the effects of incorporating writer information to data-to-text models. With writer information, our model successfully generated highest quality summaries that scored 20.84 points of BLEU score. Acknowledgments We would like to thank the anonymous reviewers for their helpful suggestions. This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), JST PRESTO (Grant Number JPMJPR1655), and AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL). 2111 References Tatsuya Aoki, Akira Miyazawa, Tatsuya Ishigaki, Keiichi Goshima, Kasumi Aoki, Ichiro Kobayashi, Hiroya Takamura, and Yusuke Miyao. 2018. Generating Market Comments Referring to External Resources. In Proceedings of the 11th International Conference on Natural Language Generation, pages 135–139. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the Third International Conference on Learning Representations. Regina Barzilay and Mirella Lapata. 2005. 
Collective content selection for concept-to-text generation. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 331–338. Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2018. Simulating Action Dynamics with Neural Process Networks. In Proceedings of the Sixth International Conference on Learning Representations. David L Chen and Raymond J Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the 25th international conference on Machine learning, pages 128–135. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724– 1734. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Elizabeth Clark, Yangfeng Ji, and Noah A Smith. 2018. Neural Text Generation in Stories Using Entity Representations as Context. In Proceedings of the 16th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2250–2260. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1631–1640. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the Unknown Words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 140–149. Luong Hoang, Sam Wiseman, and Alexander Rush. 2018. Entity Tracking Improves Cloze-style Reading Comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1049–1055. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic Entity Representations in Neural Language Models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1830–1839. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329–339. Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2016. Dynamic entity representation with max-pooling improves machine reading. In Proceedings of the 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 850–855. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 91–99. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text Generation by Structure-aware Seq2seq Learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. 2112 Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment. In Proceedings of the 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 720–730. Soichiro Murakami, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, and Yusuke Miyao. 2017. Learning to generate market comments from stock prices. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1374–1384. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Karl Pearson. 1901. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-Text Generation with Content Selection and Planning. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of adam and beyond. In Proceedings of the Sixth International Conference on Learning Representations. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Orderplanning neural text generation from structured data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kumiko Tanaka-Ishii, Kˆoiti Hasida, and Itsuki Noda. 1998. Reactive content selection in the generation of real-time soccer commentary. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 1282– 1288. Yasufumi Taniguchi, Yukun Feng, Hiroya Takamura, and Manabu Okumura. 2019. Generating Live Soccer-Match Commentary from Play Data. 
In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics, 5:87–99. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory Networks. In Proceedings of the Third International Conference on Learning Representations. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in Data-to-Document Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-Aware Language Models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1850–1859. A Algorithm The generation process of our model is shown in Algorithm 1. For a concise description, we omit the condition for each probability notation. <SOD> and <EOD> represent “start of the document” and “end of the document”, respectively. B Experimental settings We set the dimensions of the embeddings to 128, and those of the hidden state of RNN to 512 and all of parameters are initialized with the Xavier initialization (Glorot and Bengio, 2010). We set the maximum number of epochs to 30, and choose the model with the highest BLEU score on the development data. The initial learning rate is 2e-3 and AMSGrad is also used for automatically adjusting the learning rate (Reddi et al., 2018). Our implementation uses DyNet (Neubig et al., 2017). 2113 Algorithm 1: Generation process Input: Data records s, Annotations Z1:T , E1:T , A1:T , N1:T 1 Initialize {re,a,v}r∈x, {¯e}e∈E, hLM 0 , hENT 0 2 t ←0 3 et, yt ←NONE, < SOD > 4 while yt ̸=< EOD > do 5 t ←t + 1 6 if p(Zt = 1) ≥0.5 then /* Select the entity */ 7 et ←arg max p(Et = e′ t) 8 if et ̸∈Et−1 then /* If et is a new entity */ 9 hENT′ t ←GRUE(¯et, hENT t−1) 10 Et ←Et−1 ∪{et} 11 else if et ̸= et−1 then /* If et has been observed before, but is different from the previous one. */ 12 hENT’ t ←GRUE(W ShENT s , hENT t−1), 13 where s = max{s : s ≤t −1, e = es} 14 else 15 hENT’ t ←hENT t−1 /* Select an attribute for the entity, et. */ 16 at ←arg max p(At = a′ t) 17 hENT t ←GRUA(ret,at,x[et,at], hENT′ t ) 18 if at is a number attribute then 19 if p(Nt = 1) ≥0.5 then 20 yt ←numeral of x[et, at] 21 else 22 yt ←x[et, at] 23 end 24 else 25 yt ←x[et, at] 26 h′ t ←tanh W H(hLM t−1 ⊕hENT t )  27 hLM t ←LSTM(yt ⊕h′ t, hLM t−1) 28 else 29 et, at, hENT t ←et−1, at−1, hENT t−1 30 h′ t ←tanh W H(hLM t−1 ⊕hENT t )  31 yt ←arg max p(Yt) 32 hLM t ←LSTM(yt ⊕h′ t, hLM t−1) 33 end 34 if yt is “.” then 35 hENT t ←GRUA(vREFRESH, hENT t ) 36 end 37 return y1:t−1;
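As a companion to Algorithm 1, the following Python sketch mirrors its control flow. Every call on the model object (p_z, select_entity, entity_update, and so on) is a placeholder for the corresponding learned component of Section 4, and the data access records[entity][attribute] is schematic; the sketch is an assumption-laden illustration, not the released implementation.

```python
def generate_summary(records, model, max_len=800):
    """Greedy decoding loop mirroring Algorithm 1 (schematic)."""
    h_lm, h_ent = model.init_states(records)         # Section 4.1 initialization
    seen, last_state = set(), {}
    entity, token, output = None, "<SOD>", []
    for _ in range(max_len):
        if token == "<EOD>":
            break
        if model.p_z(h_lm, h_ent) >= 0.5:             # refer to a data record (Z_t = 1)
            entity = model.select_entity(h_lm, h_ent, seen)               # Eq. (4)
            h_ent = model.entity_update(h_ent, entity, seen, last_state)  # Eq. (5)
            seen.add(entity)
            attribute = model.select_attribute(entity, h_lm, h_ent)       # Eq. (6)
            h_ent = model.attribute_update(h_ent, entity, attribute)      # Eq. (7)
            value = records[entity][attribute]
            # N_t picks between a spelled-out and a raw numeric surface form
            token = model.numeral_form(value) if model.p_n(h_lm, h_ent) >= 0.5 else str(value)
            last_state[entity] = h_ent
        else:                                         # plain language-model step (Z_t = 0)
            token = model.next_word(h_lm, h_ent)
        output.append(token)
        h_lm = model.lm_update(token, h_lm, h_ent)    # Eqs. (9)-(11)
        if token == ".":
            h_ent = model.refresh(h_ent)              # footnote 4: refresh after a period
    return output
```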
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2114–2124 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2114 Reinforced Dynamic Reasoning for Conversational Question Generation Boyuan Pan1∗, Hao Li1, Ziyu Yao2, Deng Cai1,3, Huan Sun2 1State Key Lab of CAD&CG, Zhejiang University 2The Ohio State University 3Alibaba-Zhejiang University Joint Institute of Frontier Technologies {panby, haolics, dcai}@zju.edu.cn {yao.470, sun.397}@osu.edu Abstract This paper investigates a new task named Conversational Question Generation (CQG) which is to generate a question based on a passage and a conversation history (i.e., previous turns of question-answer pairs). CQG is a crucial task for developing intelligent agents that can drive question-answering style conversations or test user understanding of a given passage. Towards that end, we propose a new approach named Reinforced Dynamic Reasoning (ReDR) network, which is based on the general encoder-decoder framework but incorporates a reasoning procedure in a dynamic manner to better understand what has been asked and what to ask next about the passage. To encourage producing meaningful questions, we leverage a popular question answering (QA) model to provide feedback and fine-tune the question generator using a reinforcement learning mechanism. Empirical results on the recently released CoQA dataset demonstrate the effectiveness of our method in comparison with various baselines and model variants. Moreover, to show the applicability of our method, we also apply it to create multiturn question-answering conversations for passages in SQuAD. 1 Introduction In this work, we study a novel task of conversational question generation (CQG) which is given a passage and a conversation history (i.e., previous turns of question-answer pairs), to generate the next question. CQG is an important task in its own right for measuring the ability of machines to lead a question-answering style conversation. It can serve as an essential component of intelligent social bots or tutoring systems, asking meaningful ∗Work done while visiting the Ohio State University. Shelly is in second grade. She is a new student at her school. Shelly's family has lived in many different places. Shelly was born in Florida. Her family moved to Tennessee when she was two years old. When she was four years old, they moved to Texas. They moved from there to Arizona, where they now live. Q1: What grade is Shelly in ? A1: second R1: Shelly is in second grade. Q2: Was she a new student ? A2: Yes R2: She is a new student at her school. Q3: Where did she move at 2 years old ? A2: Tennessee R3: Her family moved to Tennessee when she was two years old. Figure 1: An example from the CoQA dataset. Each turn contains a question (Q) and an answer (A). The dataset also provides a rationale (R) (i.e., a text span from the passage) to support each answer. and coherent questions to engage users or test student understanding about a certain topic. On the other hand, as shown in Figure 1, large-scale highquality conversational question answering (CQA) datasets such as CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018) can help train models to answer sequential questions. 
However, manually creating such datasets is quite costly, e.g., CoQA spent 3.6 USD per passage on crowdsourcing for conversation collection, and automatic CQG can potentially help reduce the cost, especially when there are a large set of passages available. In recent years, automatic question generation (QG), which aims to generate natural questions based on a certain type of data sources including structured knowledge bases (Serban et al., 2016b; Guo et al., 2018) and unstructured texts (Rus et al., 2115 2010; Heilman and Smith, 2010; Du et al., 2017; Du and Cardie, 2018), has been widely studied. However, previous works mainly focus on generating standalone and independent questions based on a given passage. To the best of our knowledge, we are the first to explore CQG, i.e., generating the next question based on a passage and a conversation history. Comparing with previous QG tasks, CQG needs to take into account not only the given passage, but also the conversation history, and is potentially more challenging as it requires a deep understanding of what has been asked so far and what information should be asked for the next round, in order to make a coherent conversation. In this paper, we present a novel framework named Reinforced Dynamic Reasoning (ReDR) network. Inspired by the recent success of reading comprehension models (Xiong et al., 2017; Seo et al., 2017), ReDR adapts their reasoning procedure (which encodes the knowledge of the passage and the conversation history based on a coattention mechanism) and moreover dynamically updates the encoding representation based on a soft decision maker to generate a coherent question. In addition, to encourage ReDR to generate meaningful and interesting questions, ideally, one may employ humans to provide feedback, but as widely acknowledged, involving humans in the loop for training models can be very costly. Therefore, in this paper, we leverage a popular and effective reading comprehension (or QA) model (Chen et al., 2017) to predict the answer to a generated question and use its answer quality (which can be seen as a proxy for real human feedback) as rewards to fine-tune our model based on a reinforcement learning mechanism (Williams, 1992). Our contributions are summarized as follows: • We introduce a new task of Conversational Question Generation (CQG), which is crucial for developing intelligent agents to drive question-answering style conversations and can potentially provide valuable datasets for future relevant research. • We propose a new and effective framework for CQG, which is equipped with a dynamic reasoning component to generate a conversational question and is further fine-tuned via a reinforcement learning mechanism. • We show the effectiveness of our method using the recent CoQA dataset. Moreover, we show its wide applicability by using it to create multi-turn QA conversations for passages in SQuAD (Rajpurkar et al., 2016). 2 Task Definition Formally, we define the task of Conversational Question Generation (CQG) as: Given a passage X and the previous turns of questionanswer pairs {(q1, a1), (q2, a2), ..., (qk−1, ak−1)} about X, CQG aims to generate the next question qk that is related to the given passage and coherent with the previous questions and answers, i.e., qk = arg max qk P(qk|X, q<k, a<k) (1) where P(qk|X, q<k, a<k) is a conditional probability of generating the question qk. 3 Methodology We show our proposed framework named Reinforced Dynamic Reasoning (ReDR) network in Figure 2. 
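To make the interface of Eq. (1) concrete, the following minimal Python sketch (our own illustration, not the authors' code) assembles the conversation history for the k-th turn by concatenating the previous question-answer pairs into a single token sequence; together with a rationale from the passage, this forms the input from which the next question is generated.

```python
from typing import List, Tuple

def build_history(qa_pairs: List[Tuple[str, str]]) -> List[str]:
    """Concatenate previous questions and answers <q1, a1, ..., q_{k-1}, a_{k-1}>
    into a single token sequence c = {c1, ..., cm}."""
    tokens: List[str] = []
    for question, answer in qa_pairs:
        tokens.extend(question.split())
        tokens.extend(answer.split())
    return tokens

# Toy example based on the CoQA-style conversation in Figure 1.
history = build_history([
    ("What grade is Shelly in ?", "second"),
    ("Was she a new student ?", "Yes"),
])
rationale = "Her family moved to Tennessee when she was two years old .".split()
# The generator models P(q_k | X, q_<k, a_<k) given (rationale, history).
print(len(history), len(rationale))
```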
Since a full passage is usually too long and makes it hard to focus on the most relevant information for generating the next question, our method first selects a text span from the passage as the rationale at each conversation turn, and then dynamically models the reasoning procedure for encoding the conversation history and the selected rationale, before finally decoding the next question. 3.1 Rationale Selection We simply set each sentence in the passage as the corresponding rationale for each turn of the conversation. When experimenting with CoQA, we use the rationale span provided in the dataset. Besides for simplicity and efficiency, another reason that we adopt this rule-based method is that previous research demonstrated that the transition of the dialog attention is smooth (Reddy et al., 2018; Choi et al., 2018), meaning that earlier questions in a conversation are usually answerable by the preceding part of the passage while later questions tend to focus on the ending part of the passage. The selected rationale is then leveraged by subsequent modules for question generation. 3.2 Encoding & Reasoning At each turn k, we denote the conversation history as a sequence of m tokens, i.e., c = 2116 Reasoning Procedure Reasoning Procedure Conversation History Rationale Alignment Product Product Integration Reasoning Procedure Reasoning Procedure C U" U# 𝑝% 𝑝% R U& R C U& H G Decoder q QA Model Question Policy Gradient by Reward Figure 2: Overview of our Reinforced Dynamic Reasoning (ReDR) network. The reasoning mechanism iteratively reads the conversation history and at each iteration, its output is dynamically combined with the previous encoding representation through a soft decision maker (pd) as the new encoding representation, which is fed into the next iteration. The model is finally fine-tuned by the reward defined by the quality of the answer predicted from a QA model. {c1, c2, ..., cm}, which concatenates the previous questions and answers <q1, a1, ..., qk−1, ak−1>, and represent the rationale as a sequence of n tokens, i.e., r = {r1, r2, ..., rn}. As mentioned earlier, different from previous question generation tasks, we have two knowledge sources (i.e., the conversation history and the rationale) as the inputs. A good encoding of them is crucial for task performance and might involve a reasoning procedure across previous question-answer pairs and the selected rationale for determining the next question. We feed them respectively into a bidirectional LSTM and obtain their contextual representations C ∈Rd×m and R ∈Rd×n. Inspired by the coattention reasoning mechanism in previous reading comprehension works (Xiong et al., 2017; Seo et al., 2017; Pan et al., 2017), we compute an alignment matrix of C and R to link and fuse the information flow: S = R⊤C ∈Rn×m. We normalize this alignment matrix column-wise (i.e., softmax(S)) to obtain the relevance degree of each token in the conversation history to the whole rationale. The new representation of the conversation history w.r.t. the rationale is obtained via: H = R · softmax(S) ∈Rd×m (2) Similarly, we compute the attention over the conversation history for each word in the rationale via softmax(S⊤) and obtain the contextdependent representation of the rationale by C · softmax(S⊤). 
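The alignment step above can be illustrated with a short numpy sketch; random matrices stand in for the BiLSTM encodings C and R, and the code is only meant to make the shapes in Eq. (2) explicit rather than to reproduce the authors' implementation.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d, m, n = 8, 12, 5           # hidden size, history length, rationale length
rng = np.random.default_rng(0)
C = rng.normal(size=(d, m))  # contextual encoding of the conversation history
R = rng.normal(size=(d, n))  # contextual encoding of the rationale

S = R.T @ C                          # alignment matrix, (n, m)
H = R @ softmax(S, axis=0)           # history representation w.r.t. the rationale, Eq. (2)
R_ctx = C @ softmax(S.T, axis=0)     # context-dependent rationale representation, (d, n)
print(S.shape, H.shape, R_ctx.shape)
```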
In addition, as in (Xiong et al., 2017), we also consider the above new representation of the conversation history and map it to the space of rationale encodings via H · softmax(S⊤), and finally obtain the codependent representation of the rationale and the conversation history: G = [C; H] · softmax(S⊤) ∈R2d×n (3) where [; ] means concatenation across row dimension. To deeply capture the interaction between the rationale and the conversation history, we feed the co-dependent representation G combined with the rationale R into an integration model instantiated by a bi-directional LSTM: u0 i = BiLSTM(u0 i−1, u0 i+1, [Gi; Ri]) ∈Rd (4) We define the reasoning process in our paper as Eqn. (2-4), and now obtain a matrix U0 = [u0 1, u0 2, ..., u0 n] as the encoding representation after one-layer reasoning procedure, which can be fed into the decoder subsequently. 3.3 Dynamic Reasoning Oftentimes the conversation history is very informative and complicated, and one single layer of reasoning may be insufficient to comprehend the subtle relationship among the rationale, the conversation history, and the to-be-generated question. Therefore, we propose a dynamic reasoning procedure to iteratively update the encoding representation. We regard U0 as a new representation 2117 of the rationale and input it to the next layer of reasoning together with C: eU1 = Freason(U0, C) (5) where Freason is the reasoning procedure (Eqn. 24), and eU1 is the hidden states of the BiLSTM integration model at the next reasoning layer. To effectively learn what information in eU1 and U0 is relevant to keep, we use a soft decision maker to determine their weights: U1 = pd ⊙U0 + (e1 −pd) ⊙eU1 pd = σ(w⊤ u U0 + w⊤ g G + w⊤ r R + b) (6) where e1 is an all-ones vector, and wu, wg, wr, b are trainable parameters. pd ∈Rn is the decision maker, used as a soft switch to choose between different levels of reasoning. U1 is the representation to be used for the next layer of reasoning. This iterative procedure halts when a maximum number of reasoning layers N is reached (N ≥1). The final representation UN is fed into the decoder. 3.4 Decoding The decoder generates a word by sampling from the probability Pgen(yt|y<t, c, r) which can be computed via: Pgen(yt|y<t, c, r) = MLP(ot, vt) ot = LSTM(ot−1, Emb(yt−1), vt−1) (7) where MLP stands for a standard multilayer perceptron network, yt is the t-th word in the generated question, ot is the hidden state of the decoder at time step t, and Emb(·) indicates the word embedding. vt is an attentive read of the encoding representation: vt = Pn i=1 αt,iuN i , where the weight αt,i ∈(0, 1) is scored by another MLP(ot, uN i ) network. Observing that a question may share common words with the rationale that it is based on and inspired by the widely adopted copy mechanism (Gu et al., 2016; See et al., 2017), we also apply a pointer network for the generator to copy words from the rationale. Now the probability of generating target word yt becomes: P(yt|y<t, c, r) = λPgen(yt) + (1 −λ)Ppt(yt) (8) where Pgen(yt)=Pgen(yt|y<t, c, r) is defined earlier, Ppt(yt) = P i:ri=yt αt,i is the probability of copying word yt from r (only if r contains yt), and λ is the weight to balance the two: λ = σ(w⊤ v vt + w⊤ o ot + w⊤ y Emb(yt−1) + bpt) (9) where w⊤ v , w⊤ o , w⊤ y and bpt are to be learnt. To optimize all parameters in ReDR, we adopt the maximum likelihood estimation (MLE) approach, i.e., maximizing the summed log likelihood of words in a target question. 
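A compact numpy sketch of the reasoning procedure and the soft decision maker (Eqs. 2-6) is given below. To keep it self-contained, the BiLSTM integration of Eq. (4) is replaced by a single tanh projection and all weights are random; it is a simplified illustration of the gating mechanism, not the released model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reason(U, C, W_int):
    # Stand-in for F_reason (Eqs. 2-4): coattention between the current
    # rationale-side encoding U and the history encoding C, followed by a
    # tanh projection that replaces the BiLSTM integration of Eq. (4).
    S = U.T @ C                                   # alignment matrix, (n, m)
    H = U @ softmax(S, axis=0)                    # history w.r.t. rationale, (d, m)
    G = np.vstack([C, H]) @ softmax(S.T, axis=0)  # co-dependent representation, (2d, n)
    return np.tanh(W_int @ np.vstack([G, U]))     # integrated representation, (d, n)

d, m, n, N = 8, 12, 5, 3                  # hidden size, history len, rationale len, layers
rng = np.random.default_rng(0)
C = rng.normal(size=(d, m))               # history encoding
R = rng.normal(size=(d, n))               # rationale encoding
W_int = 0.1 * rng.normal(size=(d, 3 * d))
w_u = 0.1 * rng.normal(size=d)
w_g = 0.1 * rng.normal(size=2 * d)
w_r = 0.1 * rng.normal(size=d)
b = 0.0

U = reason(R, C, W_int)                   # U^0: one layer of reasoning
for _ in range(1, N):
    U_tilde = reason(U, C, W_int)         # \tilde{U}^j, Eq. (5)
    S = U.T @ C
    G = np.vstack([C, U @ softmax(S, axis=0)]) @ softmax(S.T, axis=0)
    p_d = sigmoid(w_u @ U + w_g @ G + w_r @ R + b)   # soft decision maker, (n,)
    U = p_d * U + (1.0 - p_d) * U_tilde              # Eq. (6)
print(U.shape)                            # U^N, fed into the decoder
```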
3.5 Reinforcement Learning for Fine-tuning As shown by recent datasets like CoQA and QuAC, human-created questions tend to be meaningful and interesting. For example, in Figure 1, given the second rationale R2 “She is a new student at her school”, humans tend not to ask “Where is she?”, and similarly given R3, they usually do not create the question “What happened?”. Although both are legitimate questions, they tend to be less interesting and meaningful compared with the human-created ones shown in Figure 1. The interestingness or meaningfulness of a question is subjective and hard to define, automatically measuring which is a difficult problem itself. Ideally, one can involve humans in the loop to judge the generated question and provide feedback, but it can be very costly, if not impossible. Driven by such observations, we use the REINFORCE (Williams, 1992) algorithm and adopt one of the state-of-the-art reading comprehension models DrQA (Chen et al., 2017) as a substitute for humans to provide feedback to the question generator. DrQA answers a question based on the given passage and has achieved a competitive performance on CoQA (Reddy et al., 2018). During training, we apply DrQA to answer a generated question, and compare its answer with the human-provided answer (which is associated with the same rationale for generating the question)1. If the answers match well with each other, we regard our generator produces a meaningful question since it asks about the same thing as humans do, and will assign high rewards to such questions. Formally, we minimize the negative expected reward for a generated question: JRL = −Eq∼π(q|r,c)[R(a, a∗)] (10) where π(q|r, c) = Q t P(yt|y<t, c, r) is the action policy defined in Eqn. (8) for producing question 1We use the CoQA dataset for training and such information is available as shown in Figure 1. 2118 Dataset Passages QA Turns per Pairs Passage Training 7199 10.8k 15.0 Dev 500 8.0k 15.9 Table 1: Statistics of the CoQA dataset. q given rationale r and conversation history c, and R(a, a∗) is the reward function defined by the F1 score2 between the DrQA predicted answer a and the human-provided answer a∗. For computational efficiency concerns, during training, we make sure that the ground-truth question is in the sampling pool and use beam search to generate 5 more questions. Note that besides providing rewards for finetuning our generator, DrQA model also serves another purpose: When applying our framework to any passage, we can use DrQA to produce an answer to the currently generated question so that the conversation history can be updated for the next-turn of question generation. In addition, our framework is not limited to DrQA and other more advanced QA models can apply as well. 4 Experiments 4.1 Dataset We use the CoQA dataset3 (Reddy et al., 2018) to experiment with our ReDR and baseline methods. CoQA contains text passages from diverse domains, conversational questions and answers developed for each passage, as well as rationales (i.e., text spans extracted from given passages) to support answers. The dataset consists of 108k questions in the training set and 8k questions in the development (dev) set with a large hidden test set for competition purpose, and our results are shown on the dev set. 4.2 Baselines As discussed earlier, CQG has been underinvestigated so far, and there are few existing baselines for our comparison. 
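The fine-tuning objective can be sketched as follows. The reward is taken to be the standard token-overlap F1 between the QA model's answer and the human answer (the paper specifies F1 but not its exact implementation), and the loss is the plain REINFORCE estimate of Eq. (10) without a baseline term; the example values are hypothetical.

```python
from collections import Counter

def answer_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and the human answer,
    serving as the reward R(a, a*) used for fine-tuning."""
    pred, ref = prediction.split(), reference.split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def reinforce_loss(log_probs, reward):
    """Negative expected reward for one sampled question (Eq. 10):
    the summed word log-probabilities scaled by the QA-based reward."""
    return -reward * sum(log_probs)

# Toy example: a hypothetical sampled question with per-word log-probabilities,
# scored by comparing the QA model's predicted answer against the human answer.
reward = answer_f1("in a barn", "in a barn near a farm house")
loss = reinforce_loss([-0.3, -1.2, -0.7, -0.5], reward)
print(round(reward, 3), round(loss, 3))
```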
Because of their high relevance with our task as well as their superior performance demonstrated by previous works, we choose to compare with the following models: 2F1 score is the common evaluation metric for QA and is defined as the harmonic mean of precision and recall. 3https://stanfordnlp.github.io/coqa/ Seq2Seq (Sutskever et al., 2014) is a basic encoder-decoder sequence learning system, which has been widely used for machine translation (Luong et al., 2015) and dialogue generation (Wen et al., 2017). We concatenate the rationale and the conversation history as the input sequence in our setting. NQG (Du et al., 2017) is a strong attentionbased neural network approach for question generation task. The input is the same as the above Seq2Seq model. 4.3 Implementation Details Our word embeddings are initialized by glove.840B.300d (Pennington et al., 2014). We set the LSTM hidden unit size to 500 and set the number of layers of LSTMs to 2 in both the encoder and the decoder. Optimization is performed using stochastic gradient descent (SGD), with an initial learning rate of 1.0. The learning rate starts decaying at the step 15000 with a decay rate of 0.95 for every 5000 steps. The mini-batch size for the update is set at 64. We set the dropout (Srivastava et al., 2014) ratio as 0.3 and the beam size as 5. The maximum number of iterations for the dynamic reasoning is set to be 3. Since the CoQA contains abstractive answers, we apply DrQA as our question answering model and follow Yatskar (2018) to separately train a binary classifier to produce “yes” or “no” for yes/no questions4. Code is available at https: //github.com/ZJULearning/ReDR. 4.4 Automatic Evaluation Metrics We follow previous question generation work (Xu et al., 2017; Du et al., 2017) to use BLEU5 (Papineni et al., 2002) and ROUGE-L (Lin, 2004) to measure the relevance between the generated question and the ground-truth one. To evaluate the diversity of the generated questions, we follow (Li et al., 2016a) to calculate Dist-n (n=1,2), which is the proportion of unique n-grams over the total number of n-grams in the generated questions for all passages, and (Zhang et al., 2018) to use the Ent-n (n=4) metric, which reflects how evenly the n-gram distribution is over all generated questions. For all the metrics, the larger they are, 4Our modified DrQA model achieves 68.8 F1 scores on the CoQA dev set. 5We adopt the 4th smoothing technique as proposed in (Chen and Cherry, 2014) for short text generation. 2119 Models Relevance Diversity BLEU RG-L Dist-1 Dist-2 Ent-4 Vanilla Seq2Seq Model 7.64 26.68 0.010 0.034 3.370 NQG (Du et al., 2017) 13.97 31.75 0.017 0.068 6.518 With 1 Layer Reasoning, no RL 16.13 32.24 0.053 0.171 7.862 With 2 Layer Reasoning, no RL 17.85 33.06 0.062 0.216 8.285 With 3 Layer Reasoning, no RL 17.42 32.88 0.061 0.205 8.247 With Dynamic Reasoning, no RL 19.10 33.57 0.064 0.220 8.304 Reinforced Dynamic Reasoning (ReDR) 19.69 34.05 0.069 0.225 8.367 Table 2: Quantitative evaluation for conversational question generation using CoQA dataset. the more relevant or diverse the generated questions are. Results and Analysis Table 2 shows the performance of various models on the CoQA dataset. As we can see, our model ReDR and its variants perform much better than the baselines, which indicates that the reasoning procedure can significantly boost the quality of the encoding representations and thus improve the question generation performance. 
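For reference, the Dist-n diversity metric used above can be computed as in the short sketch below; whitespace tokenization and the example questions are our simplification.

```python
from typing import List

def dist_n(questions: List[str], n: int) -> float:
    """Dist-n: number of unique n-grams divided by the total number of n-grams
    over all generated questions (Li et al., 2016a)."""
    ngrams = []
    for q in questions:
        tokens = q.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

generated = ["what was the animal 's name ?", "was she alone ?", "who else ?"]
print(dist_n(generated, 1), dist_n(generated, 2))
```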
To investigate the effect of the reasoning procedure and fine-tuning in our model design, we also conduct an ablation study: (1) We first test our model with only one layer of reasoning, i.e., directly feeding the encoding representation U0 into the decoder. The results drop a lot on all the metrics, which indicates that there is abundant semantic information in the input text so the multi-layer reasoning is necessary. (2) We then augment our model with two or three layers of reasoning but without the decision maker pd. In other words, we directly use the hidden states of the integration LSTM as the input to the next reasoning layer (formally, U j = ˜U j). We can see that the performance of our model increases with a two-layer reasoning while decreases with a three-layer reasoning. We conjecture that the two-layer reasoning network is saturated for most of the input text sequences, thus directly adding a layer of network for all the input text seems not optimal. (3) When we add the decision maker to dynamically compute the encoding representations, the results are greatly improved, which demonstrates that using a dynamic procedure can distribute proper weight of each layer to the input sequences in different lengths and amount of information. (4) Finally, we fine-tune the model with the reinforcement learning framework, and the results show that using the NQG ReDR Human Naturalness 1.94 1.92 2.14 Relevance 1.16 2.02 2.82 Coherence 1.12 1.94 2.94 Richness 1.16 2.30 2.54 Answerability 1.18 1.86 2.96 Table 3: Human evaluation results on CoQA. “Human” in the table means the original human-created questions in CoQA. answer quality as the reward is helpful for generating better questions. 4.5 Human Evaluation We conduct human evaluation to measure the quality of generated questions. We randomly sampled 50 questions along with their conversation history and the passage, and consider 5 aspects: Naturalness, which indicates the grammaticality and fluency; Relevance, which indicates the connection with the topic of the passage; Coherence, which measures whether the generated question is coherent with the conversation history; Richness, which measures the amount of information contained in the question. Answerability, which indicates whether the question is answerable based on the passage. For each sample, 5 people 6 are asked to rank three questions (the ReDR question, the NQG question and the human-created question) by assigning each a score from {1,2,3} (the higher, the better). For each aspect, we show the average score across the five annotators on all samples. Table 3 shows the results of human evaluation. We can see that our method almost outperforms NQG in all aspects. For Naturalness, the three 6All annotators are native English speakers. 2120 Category NQG ReDR Human Question Type “what” Question 0.45 0.42 0.35 “which” Question 0.01 0.01 0.02 “when” Question 0.07 0.05 0.04 “where” Question 0.08 0.06 0.07 “who” Question 0.06 0.22 0.15 “why” Question 0.15 0.03 0.03 yes/no Question 0.08 0.07 0.21 Linguistic Feature Question Length 4.05 5.34 6.48 Explicit Coref. 0.51 0.53 0.47 Implicit Coref. 0.32 0.19 0.19 Table 4: Linguistic statistics for the generated questions and the human annotated questions in CoQA. methods obtain the similar scores, which is probably because that the most generated questions are short and fluent, makes them have no significant difference on this aspect. 
We also observe that on the Relevance, Coherence and Answerability aspects, there is an obvious gap between the generative models and human annotation. This indicates that the contextual understanding is still a challenging problem for the task of the conversational question generation. 4.6 Linguistic Analysis We further analyze the generated questions in terms of their linguistic features and constitutions in Table 4, from which we draw three observations: (1) Overall, the distribution of the major types of questions generated by ReDR is closer to human-created questions, in comparison with NQG. For example, ReDR generates a large portion of “what” and “who” questions, similarly as humans. (2) We observe that NQG tends to generate many single-word questions such as “Why?” while our method successfully alleviates this problem. (3) Both ReDR and NQG generate fewer yes/no questions than humans, as a result of generating more “wh”-type of questions. For the relationship between a question and its conversation history, following the analysis in CoQA, we randomly sample 150 questions respectively from each method and observe that about 50% questions generated by ReDR contain explicit coreference markers such as “he”, “she” or “it”, which is similar to the other two methods. Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept. But Cotton wasn't alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters... OQ1: What color was cotton ? A1: white NQG: What type of animal was it ? ReDR: What was the animal 's name ? OQ2: Where did she live ? A2: in a barn NQG: What was it ? ReDR: What kind of house did she live ? OQ3: Did she live alone ? A3: no NQG: Why ? ReDR: Was she alone ? OQ4: Who did she live with? A4: with her mommy and 5 sisters NQG: What does she do ? ReDR: Who else ? Figure 3: Example questions generated by human (i.e., original questions denoted as OQ), NQG and our ReDR on CoQA. However, NQG generates much more questions consisting of implicit coreference markers like “Where?” or “Who?”, which can be less meaningful or not answerable as also verified in Table 3. 4.7 Case Study In Figure 3, we show the output questions of our ReDR and NQG on an example from CoQA dataset. For the first turn, both ReDR and NQG generate a meaningful and answerable question. For the second turn, NQG generates “What was it?”, which is answerable and related to the conversation history but simpler than our question “What kind of house did she live?”. For the third turn, NQG generates a coherent but less meaningful question “Why?”, while our method generates “Was she alone?”, which is very similar to the human-created question. For the last turn, NQG produces a question that is neither coherent nor answerable, while ReDR asks a much better question “Who else?”. To show the applicability of ReDR to generate QA style conversations on any passages, we apply it to passages in the SQuAD reading comprehension dataset (Rajpurkar et al., 2016) and show an example in Figure 4. Since there are no rationales 2121 The game's Media Day, which was typically held on the Tuesday afternoon prior to the game, was moved to the Monday evening and rebranded as super bowl opening night. The event was held on February 1, 2016 at Sap Center in San Jose. 
Alongside the traditional media availabilities, the event featured an opening ceremony with player introductions on a replica of the golden gate bridge … Q1: What was held on Monday ? A1: game's Media Day Q2: Where ? A2: Sap Center Q3: What was the opening ceremony for ? A3: player introductions Figure 4: Our generated conversation on a SQuAD passage. The questions are generated by our ReDR and the answers are predicted by DrQA. provided in the dataset for generating consecutive questions, we first apply our rule-based rationale selection as introduced in Section 3.1 and then generate a question based on the selected rationale and the conversation history. The answers are predicted by our modified DrQA. Figure 4 shows that our generated questions are closely related to the passage, e.g., the first question contains “Monday” and the third one mentions “opening ceremony”. Moreover, we can also generate interesting questions such as “Where?” which connects to previous questions and makes a coherent conversation. 5 Related Work Question Generation. Generating questions from various kinds of sources, such as texts (Rus et al., 2010; Heilman and Smith, 2010; Mitkov and Ha, 2003; Du et al., 2017), search queries (Zhao et al., 2011), knowledge bases (Serban et al., 2016b) and images (Mostafazadeh et al., 2016), has attracted much attention recently. Our work is most related to previous work on generating questions from sentences or paragraphs. Most early approaches are based on rules and templates (Heilman and Smith, 2010; Mitkov and Ha, 2003), while Du et al. (2017) recently proposed to generate a question by a Sequence-to-Sequence neural network model (Sutskever et al., 2014) with attention (Luong et al., 2015). Other approaches such as (Zhou et al., 2017; Subramanian et al., 2017) take into account the answer information in addition to the given sentence or paragraph. (Du and Cardie, 2018; Song et al., 2018) further modeled the surrounding paragraph-level information of the given sentence. However, most of the work focused on generating standalone questions solely based on a sentence or a paragraph. In contrast, this work explores conversational question generation and has to additionally consider the conversation history in order to generate a coherent question, making the task much more challenging. Conversation Generation. Building chatbots and conversational agents has been pursued by many previous work (Ritter et al., 2011; Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2016a; Li et al., 2016a,b). Vinyals and Le (2015) used a Sequence-to-Sequence neural network (Sutskever et al., 2014) for generating a response given the dialog history. Li et al. (2016a) further optimized the response diversity by maximizing the mutual information between inputs and output responses. Different from these work where the response can be in any form (usually a declarative statement) and is generated solely based on the dialog history, our task is potentially more challenging as it additionally restricts the generated response to be a follow-up question about a given passage. Conversational Question Answering (CQA). CQA aims to automatically answer a sequence of questions. It has been studied in the knowledge base setting (Saha et al., 2018; Iyyer et al., 2017) and is often framed as a semantic parsing problem. 
Recently released large-scale datasets (Reddy et al., 2018; Choi et al., 2018) enabled studying it in the textual setting where the information source used to answer questions is a given passage, and they inspired many significant work (Zhu et al., 2018; Huang et al., 2018; Yatskar, 2018). However, collecting such datasets has heavily relied on human efforts and can be very costly. Based on one of the most popular datasets CoQA (Reddy et al., 2018), we examine the possibility of automatically generating conversational questions, which can potentially reduce the data collection cost for CQA. 6 Conclusion In this paper, we introduce the task of Conversational Question Generation (CQG), and propose a novel framework which achieves promising performance on the popular dataset CoQA. We in2122 corporate a dynamic reasoning procedure to the general encoder-decoder model and dynamically update the encoding representations of the inputs. Moreover, we use the quality of the answers predicted by a QA model as rewards and fine-tune our model via reinforcement learning. In the future, we would like to explore how to better select the rationale for each question. Besides, it would also be interesting to consider using linguistic knowledge such as named entities or part-of-speech tags to improve the coherence of the conversation. 7 Acknowledgments This research was sponsored in part by the Army Research Office under grant W911NF-17-1-0412, NSF Grant IIS-1815674, the National Nature Science Foundation of China (grant No. 61751307), and Ohio Supercomputer Center (Center, 1987). The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. References Ohio Supercomputer Center. 1987. Ohio supercomputer center. Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel bleu. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362–367. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1870–1879. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1342–1352. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1631–1640. 
Daya Guo, Yibo Sun, Duyu Tang, Nan Duan, Jian Yin, Hong Chi, James Cao, Peng Chen, and Ming Zhou. 2018. Question generation from sql queries improves neural semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1597–1607. Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617. Association for Computational Linguistics. Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2018. Flowqa: Grasping flow in history for conversational machine comprehension. arXiv preprint arXiv:1810.06683. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1821–1831. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. 2123 Ruslan Mitkov and Le An Ha. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing-Volume 2, pages 17–22. Association for Computational Linguistics. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1802– 1813. Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. Memen: Multi-layer embedding with memory networks for machine comprehension. arXiv preprint arXiv:1707.09098. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2383– 2392. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. 
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on empirical methods in natural language processing, pages 583–593. Association for Computational Linguistics. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Christian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference. Amrita Saha, Vardaan Pahuja, Mitesh M Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In ThirtySecond AAAI Conference on Artificial Intelligence. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016a. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence. Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016b. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 588–598. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 569–574. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Yoshua Bengio, and Adam Trischler. 2017. Neural models for key phrase detection and question generation. arXiv preprint arXiv:1706.04560. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 438–449. 2124 Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. ICLR. Zhen Xu, Bingquan Liu, Baoxun Wang, SUN Chengjie, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural response generation via gan with an approximate embedding layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 617–626. Mark Yatskar. 2018. A qualitative comparison of coqa, squad 2.0 and quac. arXiv preprint arXiv:1809.10735. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1815–1825. Shiqi Zhao, Haifeng Wang, Chao Li, Ting Liu, and Yi Guan. 2011. Automatically generating questions from queries for community-based question answering. In Proceedings of 5th international joint conference on natural language processing, pages 929– 937. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer. Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. Sdnet: Contextualized attention-based deep network for conversational question answering. arXiv preprint arXiv:1812.03593.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2125–2131 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2125 TALKSUMM: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks Guy Lev∗, Michal Shmueli-Scheuer∗, Jonathan Herzig, Achiya Jerbi, David Konopnicki IBM Research, Haifa, Israel {guylev,shmueli,hjon,davidko}@il.ibm.com, [email protected] Abstract Currently, no large-scale training data is available for the task of scientific paper summarization. In this paper, we propose a novel method that automatically generates summaries for scientific papers, by utilizing videos of talks at scientific conferences. We hypothesize that such talks constitute a coherent and concise description of the papers’ content, and can form the basis for good summaries. We collected 1716 papers and their corresponding videos, and created a dataset of paper summaries. A model trained on this dataset achieves similar performance as models trained on a dataset of summaries created manually. In addition, we validated the quality of our summaries by human experts. 1 Introduction The rate of publications of scientific papers is increasing and it is almost impossible for researchers to keep up with relevant research. Automatic text summarization could help mitigate this problem. In general, there are two common approaches to summarizing scientific papers: citations-based, based on a set of citation sentences (Nakov et al., 2004; Abu-Jbara and Radev, 2011; Yasunaga et al., 2019), and content-based, based on the paper itself (Collins et al., 2017; Nikola Nikolov and Hahnloser, 2018). Automatic summarization is studied exhaustively for the news domain (Cheng and Lapata, 2016; See et al., 2017), while summarization of scientific papers is less studied, mainly due to the lack of largescale training data. The papers’ length and complexity require substantial summarization effort from experts. Several methods were suggested to reduce these efforts (Yasunaga et al., 2019; Collins et al., 2017), still they are not scalable as they require human annotations. ∗The authors contributed equally. Title: Split and Rephrase: Better Evaluation and Stronger Baselines (Aharoni and Goldberg, 2018) Paper: Processing long, complex sentences is challenging. This is true either for humans in various circumstances or in NLP tasks like parsing and machine translation . An automatic system capable of breaking a complex sentence into several simple sentences that convey the same meaning is very appealing . A recent work by Narayan et al. (2017) introduced a dataset, evaluation method and baseline systems for the task, naming it Split-and Rephrase . The dataset includes 1,066,115 instances mapping a single complex sentence to a sequence of sentences that express the same meaning, together with RDF triples that describe their semantics. They considered two . .. Indeed, feeding the model with examples containing entities alone without any facts about them causes it to output perfectly phrased but unsupported facts (Table 3). Digging further, we find that 99% of the simple sentences (more than 89% of the unique ones) in the validation and test sets also appear in the training set, which coupled with the good memorization capabilities of SEQ2SEQ models and the relatively small number of distinct simple sentences helps to explain the high BLEU score . 
To aid further research on the task, we propose a more challenging split of the data . We also establish a stronger baseline by extending the SEQ2SEQ approach with a copy mechanism, which was shown ... We encourage future work on the split-and-rephrase task to use our new data split or the v1.0 split instead of the original one. Talk transcript: let’s begin with the motivation so processing long complex sentences is a hard task this is true for arguments like children people with reading disabilities second language learners but this is also true for sentence level and NLP systems , for example previous work show that dependency parsers degrade performance when they’re introduced with longer and longer sentences, in a similar result was shown for neural machine translation , where neural machine translation systems introduced with longer sentences starting degrading performance, the question rising here is can we automatically break a complex sentence into several simple ones while preserving the meaning or the semantics and this can be a useful component in NLP pipelines . For example, the split and rephrase task was introduced in the last EMNLP by Narayan, Gardent and Shimarina, where they introduced a dataset, an evaluation method and baseline models for this task. The task definition can be taking a complex sentence and breaking it into several simple ones with the same meaning . For example, ...semantics units in the source sentence and then rephrasing those units into a single sentences on the target site. In this work we first show the simple neural models seem to perform very well on the original benchmark, but this is only due to memorization of the training set , we propose a more challenging data split for the task to discourage this memorization and we perform automatic evaluation in error analysis on the new benchmark showing that the task is still very far from being solved. Table 1: Alignment example between a paper’s Introduction section and first 2:40 minutes of the talk’s transcript. The different colors show corresponding content between the transcript to the written paper. Recently, academic conferences started publishing videos of talks (e.g., ACL1, EMNLP1, ICML2, and more). In such talks, the presenter (usually a co-author) must describe their paper coherently and concisely (since there is a time limit), providing a good basis for generating summaries. Based on this idea, in this paper, we propose a new method, named TALKSUMM (acronym for Talkbased Summarization), to automatically generate extractive content-based summaries for scientific papers based on video talks. Our approach utilizes the transcripts of video content of conference talks, and treat them as spoken summaries of papers. Then, using unsupervised alignment algorithms, we map the transcripts to the corresponding papers’ text, and create extractive summaries. Table 1 gives an example of an alignment between 1vimeo.com/aclweb 2icml.cc/Conferences/2017/Videos 2126 a paper and its talk transcript (see Table 3 in the appendix for a complete example). Summaries generated with our approach can then be used to train more complex and datademanding summarization models. Although our summaries may be noisy (as they are created automatically from transcripts), our dataset can easily grow in size as more conference videos are aggregated. Moreover, our approach can generate summaries of various lengths. 
Our main contributions are as follows: (1) we propose a new approach to automatically generate summaries for scientific papers based on video talks; (2) we create a new dataset, that contains 1716 summaries for papers from several computer science conferences, that can be used as training data; (3) we show both automatic and human evaluations for our approach. We make our dataset and related code publicly available3. To our knowledge, this is the first approach to automatically create extractive summaries for scientific papers by utilizing the videos of conference talks. 2 Related Work Several works focused on generating training data for scientific paper summarization (Yasunaga et al., 2019; Jaidka et al., 2018; Collins et al., 2017; Cohan and Goharian, 2018). Most prominently, the CL-SciSumm shared tasks (Jaidka et al., 2016, 2018) provide a total of 40 human generated summaries; there, a citations-based approach is used, where experts first read citation sentences (citances) that reference the paper being summarized, and then read the whole paper. Then, they create a summary of 150 words on average. Recently, to mitigate annotation cost, Yasunaga et al. (2019) proposed a method, in which human annotators only read the abstract in addition to citances (not reading the full paper). Using this approach, they generated 1000 summaries, costing 600+ person-hours. Conversely, we generate summaries, given transcripts of conference talks, in a fully automatic manner, and, thus, our approach is much more scalable. Collins et al. (2017) also aimed at generating labeled data for scientific paper summarization, based on “highlight statements” that authors can provide in some publication venues. Using external data to create summaries was also proposed in the news domain. Wei and Gao 3https://github.com/levguy/talksumm (2014, 2015) utilized tweets to decide which sentences to extract from news article. Finally, alignment between different modalities (e.g., presentation, videos) and text was studied in different domains. Both Kan (2007) and Bahrani and Kan (2013) studied the problem of document to presentation alignment for scholarly documents. Kan (2007) focused on the the discovery and crawling of document-presentation pairs, and a model to align between documents to corresponding presentations. In Bahrani and Kan (2013) they extended previous model to include also visual components of the slides. Aligning video and text was studied mainly in the setting of enriching videos with textual information (Bojanowski et al., 2015; Malmaud et al., 2015; Zhu et al., 2015). Malmaud et al. (2015) used HMM to align ASR transcripts of cooking videos and recipes text for enriching videos with instructions. Zhu et al. (2015) utilized books to enrich videos with descriptive explanations. Bojanowski et al. (2015) proposed to align video and text by providing a time stamp for every sentence. The main difference between these works and ours is in the alignment being used to generate textual training data in our case, rather than to enrich videos. 3 The TALKSUMM Dataset 3.1 Data Collection Recently, many computer science academic associations including ACL, ACM, IMLS and more, have started recording talks in different conferences, e.g., ACL, NAACL, EMNLP, and other colocated workshops. A similar trend occurs in other domains such as Physics4, Biology5, etc. In a conference, each speaker (usually a coauthor) presents their paper given a timeframe of 15-20 minutes. 
Thus, the talk must be coherent and concentrate on the most important aspects of a paper. Hence, the talk can be considered as a summary of the paper, as viewed by its authors, and is much more comprehensive than the abstract, which is written by the authors as well. In this work, we focused on NLP and ML conferences, and analyzed 1716 video talks from ACL, NAACL, EMNLP, SIGDIAL (2015-2018), and ICML (2017-2018). We downloaded the videos and extracted the speech data. Then, via 4www.cleoconference.org 5igem.org/Videos/Lecture_Videos 2127 a publicly available ASR service6, we extracted transcripts of the speech, and based on the video metadata (e.g., title), we retrieved the corresponding paper (in PDF format). We used ScienceParse7 to extract the text of the paper, and applied a simple processing in order to filter-out some noise (e.g. lines starting with the word “Copyright”). At the end of this process, the text of each paper is associated with the transcript of the corresponding talk. 3.2 Dataset Generation The transcript itself cannot serve as a good summary for the corresponding paper, as it constitutes only one modality of the talk (which also consists of slides, for example), and hence cannot stand by itself and form a coherent written text. Thus, to create an extractive paper summary based on the transcript, we model the alignment between spoken words and sentences in the paper, assuming the following generative process: During the talk, the speaker generates words for describing verbally sentences from the paper, one word at each time step. Thus, at each time step, the speaker has a single sentence from the paper in mind, and produces a word that constitutes a part of its verbal description. Then, at the next time-step, the speaker either stays with the same sentence, or moves on to describing another sentence, and so on. Thus, given the transcript, we aim to retrieve those “source” sentences and use them as the summary. The number of words uttered to describe each sentence can serve as importance score, indicating the amount of time the speaker spent describing the sentence. This enables to control the summary length by considering only the most important sentences up to some threshold. We use an HMM to model the assumed generative process. The sequence of spoken words is the output sequence. Each hidden state of the HMM corresponds to a single paper sentence. We heuristically define the HMM’s probabilities as follows. Denote by Y (1 : T) the spoken words, and by S(t) ∈{1, ..., K} the paper sentence index at time-step t ∈{1, ..., T}. Similarly to Malmaud et al. (2015), we define the emission probabilities 6www.ibm.com/watson/services/ speech-to-text/ 7github.com/allenai/science-parse to be: p(Y (t) = y|S(t) = k) ∝ max w∈words(k) sim(y, w) where words(k) is the set of words in the k’th sentence, and sim is a semantic-similarity measure between words, based on word-vector distance. We use pre-trained GloVe (Pennington et al., 2014) as the semantic vector representations for words. As for the transition probabilities, we must model the speaker’s behavior and the transitions between any two sentences in the paper. This is unlike the simpler setting in Malmaud et al. (2015), where transition is allowed between consecutive sentences only. To do so, denote the entries of the transition matrix by T(k, l) = p(S(t + 1) = l|S(t) = k). We rely on the following assumptions: (1) T(k, k) (the probability of staying in the same sentence at the next time-step) is relatively high. 
(2) There is an inverse relation between T(k, l) and |l −k|, i.e., it is more probable to move to a nearby sentence than jumping to a farther sentence. (3) S(t + 1) > S(t) is more probable than the opposite (i.e., transition to a later sentence is more probable than to an earlier one). Although these assumptions do not perfectly reflect reality, they are a reasonable approximation in practice. Following these assumptions, we define the HMM’s transition probability matrix. First, define the stay-probability as α = max(δ(1 − K T ), ϵ), where δ, ϵ ∈(0, 1). This choice of stayprobability is inspired by Malmaud et al. (2015), using δ to fit it to our case where transitions between any two sentences are allowed, and ϵ to handle rare cases where K is close to, or even larger than T. Then, for each sentence index k ∈{1, ..., K}, we define: T(k, k) = α T(k, k + j) = βk · λj−1, j ≥1 T(k, k −j) = γ · βk · λj−1, j ≥1 where λ, γ, βk ∈(0, 1), λ and γ are factors reflecting assumptions (2) and (3) respectively, and for all k, βk is normalized s.t. PK l=1 T(k, l) = 1. The values of λ, γ, δ and ϵ were fixed throughout our experiments at λ = 0.75, γ = 0.5, δ = 0.33 and ϵ = 0.1. The average value of α, across all papers, was around 0.3. The values of 2128 these parameters were determined based on evaluation over manually-labeled alignments between the transcripts and the sentences of a small set of papers. Finally, we define the start-probabilities assuming that the first spoken word must be conditioned on a sentence from the Introduction section, hence p(S(1)) is defined as a uniform distribution over the Introduction section’s sentences. Note that sentences which appear in the Abstract, Related Work, and Acknowledgments sections of each paper are excluded from the HMM’s hidden states, as we observed that presenters seldom refer to them. To estimate the MAP sequence of sentences, we apply the Viterbi algorithm. The sentences in the obtained sequence are the candidates for the paper’s summary. For each sentence s appearing in this sequence, denote by count(s) the number of time-steps in which this sentence appears. Thus, count(s) models the number of words generated by the speaker conditioned on s, and, hence, can be used as an importance score. Given a desired summary length, one can draw a subset of topranked sentences up to this length. 4 Experiments 4.1 Experimental Setup Data For Evaluation We evaluate the quality of our dataset generation method by training an extractive summarization model, and evaluating this model on a human-generated dataset of scientific paper summaries. For this, we choose the CL-SciSumm shared task (Jaidka et al., 2016, 2018), as this is the most established benchmark for scientific paper summarization. In this dataset, experts wrote summaries of 150 words length on average, after reading the whole paper. The evaluation is on the same test data used by Yasunaga et al. (2019), namely 10 examples from CL-SciSumm 2016, and 20 examples from CLSciSumm 2018 as validation data. Training Data Using the HMM importance scores, we create four training sets, two with fixed-length summaries (150 and 250 words), and two with fixed ratio between summary and paper lengths (0.3 and 0.4). 
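To make the alignment procedure of Section 3.2 concrete, a minimal Python sketch (not the authors' released TALKSUMM code) is given below: emission potentials taken as the maximum word-vector similarity between a spoken word and a paper sentence, a transition matrix built from assumptions (1)-(3) with stay-probability α = max(δ(1 − K/T), ϵ), and Viterbi decoding of the MAP sentence sequence. The names `paper_sents`, `transcript`, `vec` (a word-to-GloVe-vector dictionary), and the `start` distribution (uniform over Introduction sentences, zero elsewhere) are illustrative assumptions, and the emission scores are used unnormalized as a simplification.

```python
import numpy as np
from collections import Counter

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def transition_matrix(K, T_len, delta=0.33, eps=0.1, lam=0.75, gamma=0.5):
    """Transitions between any two sentences, following assumptions (1)-(3)."""
    alpha = max(delta * (1.0 - K / T_len), eps)        # stay-probability
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = alpha
        for j in range(1, K):
            if k + j < K:
                A[k, k + j] = lam ** (j - 1)           # forward jump, decays with distance
            if k - j >= 0:
                A[k, k - j] = gamma * lam ** (j - 1)   # backward jump, further penalized
        off = A[k].sum() - alpha
        if off > 0:                                    # beta_k: normalize the row to sum to 1
            A[k] *= (1.0 - alpha) / off
            A[k, k] = alpha
    return A

def emission_potentials(paper_sents, transcript, vec):
    """B[k, t]: max word similarity between spoken word t and paper sentence k."""
    B = np.full((len(paper_sents), len(transcript)), 1e-8)
    for k, sent in enumerate(paper_sents):
        for t, y in enumerate(transcript):
            sims = [cosine(vec[y], vec[w]) for w in sent if y in vec and w in vec]
            if sims:
                B[k, t] = max(max(sims), 1e-8)
    return B

def viterbi(A, B, start):
    """MAP sequence of sentence indices (log-space Viterbi)."""
    K, T_len = B.shape
    logA, logB = np.log(A + 1e-12), np.log(B)
    dp = np.log(start + 1e-12) + logB[:, 0]
    back = np.zeros((T_len, K), dtype=int)
    for t in range(1, T_len):
        scores = dp[:, None] + logA                    # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + logB[:, t]
    path = [int(dp.argmax())]
    for t in range(T_len - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def importance_scores(path):
    """count(s): number of spoken words aligned to each sentence."""
    return Counter(path)
```

Given the decoded path, `importance_scores` recovers count(s), and a summary of a desired length can then be assembled from the highest-count sentences (with Abstract, Related Work, and Acknowledgments sentences excluded from the hidden states, as described above).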
We train models on each training set, and select the model yielding the best performance on the validation set (evaluation is always done with generating a 150-words sumModel 2-R 2-F 3-F SU4-F TALKSUMM-HYBRID 35.05 34.11 27.19 24.13 TALKSUMM-ONLY 22.77 21.94 15.94 12.55 GCN HYBRID 2* 32.44 30.08 23.43 23.77 GCN CITED TEXT SPANS* 25.16 24.26 18.79 17.67 ABSTRACT* 29.52 29.4 23.16 23.34 Table 2: ROUGE scores on the CL-SciSumm 2016 test benchmark. *: results from Yasunaga et al. (2019). mary). Summarization Model We train an extractive summarization model on our TALKSUMM dataset, using the extractive variant of Chen and Bansal (2018). We test two summary generation approaches, similarly to Yasunaga et al. (2019). First, for TALKSUMM-ONLY, we generate a 150words summary out of the top-ranked sentences extracted by our trained model (sentences from the Acknowledgments section are omitted, in case the model extracts any). In the second approach, a 150-words summary is created by augmenting the abstract with non-redundant sentences extracted by our model, similarly to the “Hybrid 2” approach of Yasunaga et al. (2019). We perform early-stopping and hyper-parameters tuning using the validation set. Baselines We compare our results to SCISUMMNET (Yasunaga et al., 2019) trained on 1000 scientific papers summarized by human annotators. As we use the same test set as in Yasunaga et al. (2019), we directly compare their reported model performance to ours, including their ABSTRACT baseline which takes the abstract to be the paper’s summary. 4.2 Results Automatic Evaluation Table 2 summarizes the results: both GCN CITED TEXT SPANS and TALKSUMM-ONLY models, are not able to obtain better performance than ABSTRACT8. However, for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outperforms both GCN HYBRID 2 and ABSTRACT. Importantly, our model, trained on automaticallygenerated summaries, performs on par with models trained over SCISUMMNET, in which training data was created manually. 8While the abstract was input to GCN CITED TEXT SPANS, it was excluded from TALKSUMM-ONLY. 2129 Human Evaluation We conduct a human evaluation of our approach with support from authors who presented their papers in conferences. As our goal is to test more comprehensive summaries, we generated summaries composed of 30 sentences (approximately 15% of a long paper). We randomly selected 15 presenters from our corpus and asked them to perform two tasks, given the generated summary of their paper: (1) for each sentence in the summary, we asked them to indicate whether they considered it when preparing the talk (yes/no question); (2) we asked them to globally evaluate the quality of the summary (1-5 scale, ranging from very bad to excellent, 3 means good). For the sentence-level task (1), 73% of the sentences were considered while preparing the talk. As for the global task (2), the quality of the summaries was 3.73 on average, with standard deviation of 0.725. These results validate the quality of our generation method. 5 Conclusion We propose a novel automatic method to generate training data for scientific papers summarization, based on conference talks given by authors. We show that the a model trained on our dataset achieves competitive results compared to models trained on human generated summaries, and that the dataset quality satisfies human experts. In the future, we plan to study the effect of other video modalities on the alignment algorithm. 
We hope our method and dataset will unlock new opportunities for scientific paper summarization. References Amjad Abu-Jbara and Dragomir Radev. 2011. Coherent citation-based summarization of scientific papers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 500–509. Roee Aharoni and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and stronger baselines. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 719–724. Association for Computational Linguistics. Bamdad Bahrani and Min-Yen Kan. 2013. Multimodal alignment of scholarly documents and their presentations. In Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’13, pages 281–284. Piotr Bojanowski, Remi Lajugie, Edouard Grave, Francis Bach, Ivan Laptev, Jean Ponce, and Cordelia Schmid. 2015. Weakly-supervised alignment of video with text. In The IEEE International Conference on Computer Vision (ICCV). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494. Arman Cohan and Nazli Goharian. 2018. Scientific document summarization via citation contextualization and scientific discourse. International Journal on Digital Libraries, pages 287–303. Ed Collins, Isabelle Augenstein, and Sebastian Riedel. 2017. A supervised approach to extractive summarisation of scientific papers. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 195–205. Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the cl-scisumm 2016 shared task. In In Proceedings of Joint Workshop on Bibliometric-enhanced Information Retrieval and NLP for Digital Libraries (BIRNDL 2016). Kokil Jaidka, Michihiro Yasunaga, Muthu Kumar Chandrasekaran, Dragomir Radev, and Min-Yen Kan. 2018. The cl-scisumm shared task 2018: Results and key insights. In Proceedings of the 3rd Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL). Min-Yen Kan. 2007. Slideseer: A digital library of aligned document and presentation pairs. In Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’07, pages 81–90. Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nicholas Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What’s cookin’? interpreting cooking videos using text, speech and vision. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 143–152. Association for Computational Linguistics. Preslav I. Nakov, Ariel S. Schwartz, and Marti A. Hearst. 2004. Citances: Citation sentences for semantic analysis of bioscience text. In In Proceedings of the SIGIR?04 workshop on Search and Discovery in Bioinformatics. 2130 Michael Pfeiffer Nikola Nikolov and Richard Hahnloser. 2018. Data-driven summarization of scientific articles. 
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Zhongyu Wei and Wei Gao. 2014. Utilizing microblogs for automatic news highlights extraction. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 872–883. Dublin City University and Association for Computational Linguistics. Zhongyu Wei and Wei Gao. 2015. Gibberish, assistant, or master?: Using tweets linking to news for extractive single-document summarization. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’15, pages 1003–1006. Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander Fabbri, Irene Li, Dan Friedman, and Dragomir Radev. 2019. Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In Proceedings of AAAI 2019. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV). A A Detailed Example This section elaborates on the example presented in Table 1. Table 3 extends Table 1 by showing the manually-labeled alignment between the complete text of the paper’s Introduction section, and the corresponding transcript. Table 4 shows the alignment obtained using the HMM. Each row in this table corresponds to an interval of consecutive time-steps (i.e., a sub-sequence of the transcript) in which the same paper sentence was selected by the Viterbi algorithm. The first column (Paper Sentence) shows the selected sentences; The second column (ASR transcript) shows the transcript obtained by the ASR system; The third column (Human transcript) shows the manually corrected transcript, which is provided for readability Title: Split and Rephrase: Better Evaluation and Stronger Baselines (Aharoni and Goldberg, 2018) Paper: Processing long, complex sentences is challenging. This is true either for humans in various circumstances or in NLP tasks like parsing and machine translation . An automatic system capable of breaking a complex sentence into several simple sentences that convey the same meaning is very appealing . A recent work by Narayan et al. (2017) introduced a dataset, evaluation method and baseline systems for the task, naming it Split-and Rephrase . The dataset includes 1,066,115 instances mapping a single complex sentence to a sequence of sentences that express the same meaning, together with RDF triples that describe their semantics. They considered two system setups: a text-to-text setup that does not use the accompanying RDF information, and a semantics-augmented setup that does. They report a BLEU score of 48.9 for their best text-to-text system, and of 78.7 for the best RDF-aware one. We focus on the text-to-text setup, which we find to be more challenging and more natural. 
We begin with vanilla SEQ2SEQ models with attention (Bahdanau et al., 2015) and reach an accuracy of 77.5 BLEU, substantially outperforming the text-to-text baseline of Narayan et al. (2017) and approaching their best RDF-aware method. However, manual inspection reveal many cases of unwanted behaviors in the resulting outputs: (1) many resulting sentences are unsupported by the input: they contain correct facts about relevant entities, but these facts were not mentioned in the input sentence; (2) some facts are repeated the same fact is mentioned in multiple output sentences; and (3) some facts are missing mentioned in the input but omitted in the output. The model learned to memorize entity-fact pairs instead of learning to split and rephrase. Indeed, feeding the model with examples containing entities alone without any facts about them causes it to output perfectly phrased but unsupported facts (Table 3). Digging further, we find that 99% of the simple sentences (more than 89% of the unique ones) in the validation and test sets also appear in the training set, which coupled with the good memorization capabilities of SEQ2SEQ models and the relatively small number of distinct simple sentences helps to explain the high BLEU score . To aid further research on the task, we propose a more challenging split of the data . We also establish a stronger baseline by extending the SEQ2SEQ approach with a copy mechanism, which was shown to be helpful in similar tasks (Gu et al., 2016; Merity et al., 2017; See et al., 2017). On the original split, our models outperform the best baseline of Narayan et al. (2017) by up to 8.68 BLEU, without using the RDF triples. On the new split, the vanilla SEQ2SEQ models break completely, while the copy-augmented models perform better. In parallel to our work, an updated version of the dataset was released (v1.0), which is larger and features a train/test split protocol which is similar to our proposal. We report results on this dataset as well. The code and data to reproduce our results are available on Github.1 We encourage future work on the split-and-rephrase task to use our new data split or the v1.0 split instead of the original one. Talk Transcript: Let’s begin with the motivation so processing long complex sentences is a hard task this is true for arguments like children people with reading disabilities second language learners but this is also true for sentence level and NLP systems for example previous work show that dependency parsers degrade performance when they’re introduced with longer and longer sentences in a similar result was shown for neural machine translation where neural machine translation systems introduced with longer sentences starting degrading performance the question rising here is can we automatically break a complex sentence into several simple ones while preserving the meaning or the semantics and this can be a useful component in NLP pipelines . For example the split and rephrase task was introduced in the last EMNLP by Narayan Gardent and Shimarina where they introduced a dataset an evaluation method and baseline models for this task. The task definition can be taking a complex sentence and breaking it into several simple ones with the same meaning . 
For example if you take the sentence Alan being joined NASA in nineteen sixty three where he became a member of the Apollo twelve mission along with Alfa Worden and his back a pilot and they’ve just got its commander who would like to break the sentence into four sentences which can go as Alan bean serves as a crew member of Apolo twelve Alfa Worden was the back pilot will close it was commanded by David Scott now be was selected by NASA in nineteen sixty three we can see that the task requires first identifying independence semantics units in the source sentence and then rephrasing those units into a single sentences on the target site. In this work we first show the simple neural models seem to perform very well on the original benchmark but this is only due to memorization of the training set we propose a more challenging data split for the task to discourage this memorization and we perform automatic evaluation in error analysis on the new benchmark showing that the task is still very far from being solved. Table 3: Alignment example between a paper’s Introduction section and first 2:40 minutes of the talk’s transcript. The different colors show corresponding content between the transcript to the written paper. This is the full-text version of the example shown in Table 1. (our model predicted the alignment based on the raw ASR output); Finally, the forth column shows whether our model has correctly aligned a paper sentence with a sub-sequence of the transcript. Rows with no values in this column correspond to transcript sub-sequences which were not associated with any paper sentence in the manuallylabeled alignment. 2131 Paper Sentence ASR transcript Human transcript Processing long, complex sentences is challenging. base begin motivation processing long complex sentences hard task Let’s begin with the motivation so processing long complex sentences is a hard task ✓ This is true either for humans in various circumstances or in NLP tasks like parsing and machine translation. true arguments like children people reading disabilities second language learners also true first sentence level p system this is true for arguments like children people with reading disabilities second language learners but this is also true for sentence level and NLP systems ✓ A recent work by Narayan et al. (2017) introduced a dataset, evaluation method and baseline systems for the task, naming it Split-and Rephrase. previous work show data tendency parsers great performance introduced longer longer sentences previous work show that dependency parsers degrade performance when they’re introduced with longer and longer sentences  This is true either for humans in various circumstances or in NLP tasks like parsing and machine translation. similar results showing new machine translation new machine translation similar result was shown for neural machine translation where neural machine translation ✓ An automatic system capable of breaking a complex sentence into several simple sentences that convey the same meaning is very appealing. 
systems introduced longer sentences starting performance question rising automatically break complex sentence several simple ones preserving meaning semantics useful company p like example systems introduced with longer sentences starting degrading performance the question rising here is can we automatically break a complex sentence into several simple ones while preserving the meaning or the semantics and this can be a useful component in NLP pipelines for example ✓ A recent work by Narayan et al. (2017) introduced a dataset, evaluation method and baseline systems for the task, naming it Split-and Rephrase. leader task introduced last ’ll bynari guard going marina introduced data sets evaluation method baseline models task the split and rephrase task was introduced in the last EMNLP by Narayan Gardent and Shimarina where they introduced a dataset an evaluation method and baseline models for this task ✓ An automatic system capable of breaking a complex sentence into several simple sentences that convey the same meaning is very appealing. phoenician taking complex sentences break several simple ones example take sentence alan joined nasa nineteen sixty three became member apollo twelve mission along word inspect pilot got commander would like break sentence sentences go alan serves crew member twelve word better polls commanded david scott selected nasa nineteen sixty three the task definition can be taking a complex sentence and break it into several simple ones for example if you take the sentence Alan being joined NASA in nineteen sixty three where he became a member of the Apollo twelve mission along with Alfa Worden and his back a pilot and they’ve just got its commander who would like to break the sentence into four sentences which can go as Alan bean serves as a crew member of Apolo twelve Alfa Worden was the back pilot will close it was commanded by David Scott now be was selected by NASA in nineteen sixty three A recent work by Narayan et al. (2017) introduced a dataset, evaluation method and baseline systems for the task, naming it Split-and Rephrase. see task requires first identifying independence imagic units we can see that the task requires first identifying independence semantics units The dataset includes 1,066,115 instances mapping a single complex sentence to a sequence of sentences that express the same meaning, together with RDF triples that describe their semantics. source sentence rephrasing units single sentences target in the source sentence and then rephrasing those units into a single sentences on the target site Digging further, we find that 99% of the simple sentences (more than 89% of the unique ones) in the validation and test sets also appear in the training set, which coupled with the good memorization capabilities of SEQ2SEQ models and the relatively small number of distinct simple sentences helps to explain the high BLEU score. work first show simple neural models seem perform well original benchmark due memorization training set In this work we first show the simple neural models seem to perform very well on the original benchmark but this is only due to memorization of the training set ✓ To aid further research on the task, we propose a more challenging split of the data. 
perform close challenging data split task discourage instant memorization perform automatic evaluation analysis new benchmark showing task still far we propose a more challenging data split for the task to discourage this memorization and we perform automatic evaluation in error analysis on the new benchmark showing that the task is still very far from being solved ✓ Table 4: Alignment obtained using the HMM, for the Introduction section and first 2:40 minutes of the video’s transcript.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2132–2141 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2132 Improving Abstractive Document Summarization with Salient Information Modeling Yongjian You1,2, Weijia Jia2, 1*, Tianyi Liu1,2, Wenmian Yang1,2 1Department of Computer Science and Engineering, Shanghai Jiao Tong University 2State Key Lab of IoT for Smart City, CIS, University of Macau, Macao, SAR China {youyongjian, jiawj, liutianyi, sdq11111}@sjtu.edu.cn Abstract Comprehensive document encoding and salient information selection are two major difficulties for generating summaries with adequate salient information. To tackle the above difficulties, we propose a Transformerbased encoder-decoder framework with two novel extensions for abstractive document summarization. Specifically, (1) to encode the documents comprehensively, we design a focus-attention mechanism and incorporate it into the encoder. This mechanism models a Gaussian focal bias on attention scores to enhance the perception of local context, which contributes to producing salient and informative summaries. (2) To distinguish salient information precisely, we design an independent saliency-selection network which manages the information flow from encoder to decoder. This network effectively reduces the influences of secondary information on the generated summaries. Experimental results on the popular CNN/Daily Mail benchmark demonstrate that our model outperforms other state-of-the-art baselines on the ROUGE metrics. 1 Introduction Document summarization is a fundamental task of natural language generation which condenses the given documents and generates fluent summaries with salient information automatically. Recent successes of neural sequence-tosequence (seq2seq) models (Luong et al., 2015; Wu et al., 2016; Tu et al., 2016) enable the endto-end framework for natural language generation, which inspires the research on abstractive summarization. Abstractive document summarization employs an end-to-end language model to encode a document into high-dimensional representations and then decode the representations into an abstractive summary. Though promisDocuments: a [duke student] has [admitted to hanging a noose made of rope] from a tree near a student union , [university officials] said thursday . the prestigious private school did n’t identify the student , citing federal privacy laws . in a news release , it said the student was [no longer] on campus and [will face] student conduct [review] . the [student was identified during an investigation] by campus police and the office of student affairs and admitted to placing the noose on the tree early wednesday , the university said . ... at a forum held on the steps of duke chapel , close to where [the noose was discovered at 2 a.m]. , hundreds of people gathered . “ you came here for the reason that you want to say with me , ‘ this is no duke we will accept . ... Reference summary: student is no longer on duke university campus and will face disciplinary review . school officials identified student during investigation and the person admitted to hanging the noose , duke says . the noose , made of rope , was discovered on campus about 2 a.m. Table 1: Example of a document and its corresponding reference summary. We consider the reference summary contains all salient information and mark the words or phrases appearing in the document in [red]. 
ing improvements have been achieved recently (Li et al., 2018c; Kry´sci´nski et al., 2018), there are still many problems are not studied well, such as the incompetence of salient information modeling. Modeling salient information contains the procedure of information representation and discrimination. Generally, the most essential prerequisite for a practical document summarization model is that the generated summaries should contain adequate salient information of the original documents. However, previous seq2seq models are still incapable of achieving convincing performance, which are restricted by the following two difficulties. The first difficulty lies in the procedure of encoding. Considering a document is a long sequence of multiple sentences, the semantics of 2133 each token in document contain the dependencies with other distant tokens and its local context information. They both contribute to producing high-quality summaries with adequate salient information. The lack of long-term dependencies among tokens often leads to generating incomplete summaries (Li et al., 2018c). Unfortunately, traditional seq2seq encoders (recurrent or convolution based) are deficient in modeling dependencies among distant segments (Bengio et al., 1994; Li et al., 2018c). In recent years, the Transformer model (Vaswani et al., 2017) reveals remarkable performance in many similar tasks (Devlin et al., 2018) due to exploiting long-term dependencies, but recent studies point out this model may overlook local context occasionally (Yang et al., 2018). The absence of local context information accounts for inadequate details of salient information. Therefore, it is challenging to encode global information and local context comprehensively for each token in documents, which requires the capability of capturing long-term dependencies and local semantics at the same time. The second difficulty is to distinguish salient information from long documents precisely. In the example shown in Table 1, salient segments account for only a small part of the whole document, which is laborious for naive seq2seq models to distinguish important information from much secondary information. The summaries generated by these models usually lose salient information of original documents or even contain repetitions (Li et al., 2018c). In this paper, we propose the Extended Transformer model for Abstractive Document Summarization (ETADS) to tackle the above issues. Specifically, we design a novel focusattention mechanism and saliency-selection network equipped in the encoder and decoder respectively: (1) To comprehensively encode the documents, we design a focus-attention mechanism, where a learnable Gaussian focal bias is employed as a regularization term on attention scores. This focal bias implicitly aggregates attention on local continuous scopes to emphasize the corresponding part of document. (2) To distinguish salient information in documents, we design an independent saliency-selection network to manage the information flow from encoder to decoder explicitly. The saliency-selection network employs a gate mechanism to assign a salient score for each token in source documents according to their encoded representations. We consider the lower-score tokens are relatively insignificant and reduce their likelihood of appearing in final summaries. Finally, we conduct extensive experiments on the CNN/Daily Mail dataset which is prevailing and widely used for document summarization task. 
The experimental results show that ETADS achieves stateof-the-art ROUGE scores and outperforms many strong baselines. 2 Related Work With the development of seq2seq model on neural translation task, more and more researchers take note of its great potential in text summarization area (Fan et al., 2017; Ling and Rush, 2017; Cheng and Lapata, 2016), especially for abstractive methods. Rush et al. (2015) is the first to apply seq2seq model with attention mechanism to abstractive summarization and achieve promising improvement. Nallapati et al. (2016) modify the basic model with RNN-based encoder and decoder and propose several techniques. Chen et al. (2016) further propose to improve the novelty of generated summaries and design a distractionbased attentional model. Li et al. (2017) creatively incorporate the variational auto-encoder into the seq2seq model to learn the latent structure information. However, these models are nearly designed for abstractive sentence summarization, which focus on encoding and mining salient information on sentence-level and lead to unsatisfactory performances for document summarization. Some recent work improves the performance of neural abstractive models on document summarization task from various aspects. To better grasp the essential meaning for summarization, Chen et al. (2016) propose not only to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content. Tan et al. (2017) propose a graph-based attention mechanism in a hierarchical encoder-decoder framework to generate multi-sentence summary. Gehrmann et al. (2018) presents a content selection model for summarization that identifies phrases within a document that are likely included in its summary. To produce more informative summaries, (Gu et al., 2016) is the first to show that the copy mechanism(Vinyals et al., 2015) can alleviate the OutOf-Vocabulary problem by copying words from 2134 the source documents. See et al. (2017) rebuild this pointer-generator network and incorporate an additional coverage mechanism into the decoder. Li et al. (2018b) notice the necessity of explicit information selection and they build a gated global information filter and local sentence selection mechanism. Moreover, reinforcement learning (RL) approaches have been shown to further improve performance on these tasks(Celikyilmaz et al., 2018; Li et al., 2018a). Pasunuru and Bansal (2018) develop a loss-function based on whether salient segments are included in a summary. However, the optimization of RL-based models can be difficult to tune and slow to train. 3 Model In this section, we describe our approach from three aspects: (i) the Transformer-based encoderdecoder framework, (ii) the focus-attention mechanism for the encoder to emphasize the local context, and (iii) the saliency-selection network for the decoder to select salient information. 3.1 Encoder-Decoder Framework Given a document X = (x1, x2, ..., xm), the encoder maps its corresponding symbol representations E = (e1, e2, ..., em) to a sequence of continuous representations Z = (z1, z2, ..., zm), where m is the length of document. The decoder then decode Z into continuous representations S = (s1, s2, ..., sn) and generates abstractive summary Y = (y1, y2, ..., yn) one token a time, where n is the length of summary. Vs and Vt are the source/target vocabularies and xi ∈Vs, yj ∈Vt. 
E is the sum of word embedding representations and position embedding representations, where ei ∈Rde. Both embedding representations are initialized as (Vaswani et al., 2017) and learned during the process of optimization. 3.1.1 Encoder The encoder is composed of a stack of N identical layers, and each layer has two sub-layers. The first is the self-attention sub-layer and the second is the feed-forward sub-layer. The residual connection is employed around each of the two sublayers, followed by layer normalization. Given the example input t, the output of each sub-layer can be formalized as LayerNorm(t + SubLayer(t)). For encoder, the SubLayer(t) can be replaced with ATT(t) or FFN(t), which represents the preoutput of self-attention sub-layer or feed-forward sub-layer respectively. The details of each sublayer are presented as follows. The self-attention sub-layer takes the output of previous layer as the input. Formally, the input for the self-attention sub-layer of the l-th layer is Zl−1 ∈Rm×dm, where dm is the dimension of output. Specially, Z0 = E and the output of encoder Z = ZN. In the process of computation, three matrices query Ql ∈Rm×dm, key Kl ∈Rm×dm and value Vl ∈Rm×dm are obtained firstly by the linear projections from Zl−1 with three different metrics W Q l ∈Rdm×dm, W K l ∈ Rdm×dm and W V l ∈Rdm×dm. Then the preoutput of self-attention sub-layer can be computed with the scaled dot-product attention mechanism: ATT(Zl−1) = att(Ql, Kl, Vl) = softmax(QlKT l √dm )Vl (1) and the final output Al of this sub-layer is obtained with residual connection and layer normalization. Moreover, the self-attention sub-layer can be further extended into multi-head manner. Namely, ATTM(Zl−1) = concat(H1, ..., Hh)W C l where Hi = att(QlW Q l,i, KlW K l,i , VlW V l,i) (2) where h is the number of heads, W Q l,i ∈Rdm×dh, W K l,i ∈Rdm×dh, W V l,i ∈Rdm×dh and W C l ∈ Rh∗dh×dm are four learnable weight matrices, dh is the dimension for each head, we set dh = dm/h. The feed-forward sub-layer takes the output of self-attention sub-layer Al as the input and the computation of pre-output FFN(Al) is straightforward with a position-wise fully connected feedforward network: FFN(Al) = relu(AlW 1 l + b1 l )W 2 l + b2 l (3) where W 1 l ∈Rdm×df and W 2 l ∈Rdf×dm are two learnable weight matrices, df is the dimension of intermediate output. b1 l ∈Rdf and b2 l ∈Rdm are two learnable biases. The final output of feedforward sub-layer Zl is also the output for the l-th layer which is obtained after residual connection and layer normalization. 3.1.2 Decoder The decoder in our framework has a similar stacked structure with N identical layers. In addition to the two sub-layers introduced above, 2135 the decoder inserts another self-attention sub-layer in between, which performs multi-head attention over the output of the encoder. For clarity, we use the “bridge sub-layer” to refer to this additional self-attention sub-layer and BATT(Z, t) to represent the pre-output of this sub-layer, where Z is the encoder output and t is a example of encoded partial generated summary. The calculation of BATT(Z, t) is similar to the Eq.(1). Specifically, for the l-th bridge sub-layer in the decoder, key Kl and value Vl are obtained by linear projections from Z. Apart from the additional sub-layer, the rest of computation process is the same as the encoder, and the output of last layer HN is considered as the final decoder output H. 
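As a concrete illustration of Eqs. 1-3, a minimal PyTorch sketch of a single encoder layer is given below. This is not the released implementation: dropout, padding masks, and the decoder's bridge sub-layer are omitted for brevity, and the per-head attention energies are scaled by the head dimension.

```python
import math
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: multi-head self-attention + position-wise FFN (Eqs. 1-3)."""
    def __init__(self, d_m=512, h=8, d_f=2048):
        super().__init__()
        self.h, self.d_h = h, d_m // h
        self.W_q = nn.Linear(d_m, d_m)
        self.W_k = nn.Linear(d_m, d_m)
        self.W_v = nn.Linear(d_m, d_m)
        self.W_c = nn.Linear(d_m, d_m)                 # concatenation projection W^C
        self.ffn = nn.Sequential(nn.Linear(d_m, d_f), nn.ReLU(), nn.Linear(d_f, d_m))
        self.ln1, self.ln2 = nn.LayerNorm(d_m), nn.LayerNorm(d_m)

    def attention(self, z):                            # z: [batch, m, d_m]
        B, m, _ = z.shape
        def split(x):                                  # [B, m, d_m] -> [B, h, m, d_h]
            return x.view(B, m, self.h, self.d_h).transpose(1, 2)
        q, k, v = split(self.W_q(z)), split(self.W_k(z)), split(self.W_v(z))
        energy = q @ k.transpose(-2, -1) / math.sqrt(self.d_h)   # [B, h, m, m]
        out = torch.softmax(energy, dim=-1) @ v
        out = out.transpose(1, 2).contiguous().view(B, m, -1)
        return self.W_c(out)

    def forward(self, z):
        a = self.ln1(z + self.attention(z))            # LayerNorm(t + SubLayer(t))
        return self.ln2(a + self.ffn(a))
```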
Finally, for the i-th decoding step, we compute a distribution over the Vt for target elements yi by projecting the output of decoder stack Si via a linear layer with weights W o ∈Rdm×T and bias bo ∈RT , p(yi|y1, ..., yi−1; X) = softmax(W oSi + bo) (4) where T is the size of vocabulary Vt. 3.2 Focus-Attention Mechanism To take full advantage of documents information during the process of encoding, we design a focus-attention mechanism and build it in the selfattention sub-layers of the encoder, which is depicted as Figure 1. The “dotted boxes” indicate that the corresponding modules can be adapted into the multi-head manner. The focus-attention mechanism models a focal bias as a regularization term on attention scores which is determined by the position of center and effective coverage scope. In the l-th self-attention sub-layer, since the query Ql, key Kl and value Vl are obtained by linear projections from the input Zl−1, so that they contain similar information in different semantic space. To reduce the amount of calculation, we only utilize the query matrices Ql to compute the position vector and coverage scope vector. Specifically, for the i-th encoding step in l-th layer, the center position scalar µi l ∈R and the coverage scope scalar σi l ∈R are calculated by two linear projections, namely: µi l = U T c tanh(WpQi l + WgGl) σi l = U T d tanh(WpQi l + WgGl) (5) where Wp ∈Rdm×dm, and Wg ∈Rdm×dm are two shared weight matrices. Uc ∈Rdm and Ud ∈Rdm Focal Bias Softmax Document Attention Energy Attention Scores Query Key Scaled DotProduct Attention FocusAttention Mechanism Figure 1: The focus-attention mechanism. are two different linear projection weight vectors, m is the length of input document and Gl = 1 m Pm i=1 Qi l is the mean vector to provide complementary information. Furthermore, we regulate µi l and σi l to the closed interval [0, m], ˜µi l = m ∗sigmoid(µi l) ˜σi l = m ∗sigmoid(σi l) (6) According to the definition of Gaussian distribution, the focal bias for the i-th step fi l ∈Rm can be easily obtained with ˜µi l and ˜σi l as follows: fi,j l = −(P j −˜µi l)2 (˜σi l)2/2 (7) where P j is the absolute position of word xj in the document. fi,j l ∈[−∞, 0] measures the distance between word xj and the center position ˜µi l. Eventually, this focal bias is added to the attention energy of encoder layers before softmax normalization. ATT(Zl−1) = att(Ql, Kl, Vl) = softmax(QlKT l √ d ⊕fl)Vl (8) where ⊕denotes the addition. Moreover, we further adapt the focus-attention mechanism into the multi-head manner as Eq.2. Accordingly, the distinct focal biases are assigned for each head and different weight matrices are utilized in the process of computation. 2136 3.3 Saliency-Selection Network Abstractive document summarization is a special NLP generation task which requires to reduce the influence of secondary information and integrate salient segments to produce a condensed summary. Traditional seq2seq models often have limited performance on distinguishing salient segments (Tan et al., 2017), which emphasizes the necessity of customized selection network. In this work, we design the saliency-selection network for information selection, which is depicted as Figure 2. Concretely, we measure the saliency of each word in the document by assigning a saliency score and make a soft selection. For the i-th decoding step in l-th layer, the saliency-selection network takes query matrices Qi l ∈Rd m and key matrices Kl ∈Rm×dm as the input, where m is the length of the input document. 
Then, the network computes saliency score gi l ∈Rm as: gi,j l = sigmoid((WhQi l)(WsKj l )T ) (9) where Wh ∈Rdm×dm and Ws ∈Rdm×dm are two learnable weight matrices. gi,j l ∈[0, 1] measures the saliency of the j-th token in document for the i-th position in summary. Furthermore, we incorporate the computed saliency score gl into the attention network of bridge sub-layer by: BATT(Z, Sl−1) = att(Ql, Kl, Vl) = gl ⊗softmax(QlKT l √ d )Vl (10) where ⊗denotes element-wise multiplication. Moreover, we also adopt the saliency-selection network into the multi-head manner, which allows to model saliency from different perspectives at different positions. 3.4 Objective Function Our goal is to maximize the output summary probability given the input document. Therefore, we optimize the negative log-likelihood loss function: L = −1 |τ| X (X,Y )∈τ log p(Y |X; θ) (11) Document Partial Summary Scaled DotProduct Attention Saliency Scores Attention Scores Final Scores Key SaliencySelection Network Query Figure 2: The saliency-selection netowrk. where θ is the model parameter, and (X, Y ) is a document-summary pair in training set τ, then log(Y |X; θ) = n X i=1 log p(yi|y1, ..., yi−1, X; θ) (12) where p(yi|y1, ..., yi−1, X; θ) is calculated by the decoder. 4 Experiments In this section, we introduce the experiment setup, the implementation details, the baseline models and the experimental results. 4.1 Setup We conduct the experiments on a large-scale corpus of CNN/Daliy Mail, which has been widely used for the explorations on document summarization. The corpus is originally constructed by collecting human generated highlights for new stories in CNN and Daily Mail website (Hermann et al., 2015). We use the scripts supplied by Nallapati et al. (2016) to further obtain the CNN/Daily Mail dataset. This dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. We use the same non-anonymized version of dataset as See et al. (2017) which requires no pre-processing1. The average number of sentences 1https://github.com/abisee/cnn-dailymail 2137 in documents and summaries are 42.1 and 3.8, respectively. We assume the length of all documents should not exceed 400 tokens and all summaries should not exceed 100 tokens. The word dictionary shared by documents and summaries contains 50,000 most popular tokens in documents. In our model, we set the number of encoder/decoder layers N = 4 and the number of heads h = 8. The dimensions of the signal representation de and output dm are set to 512, and the dimension of intermediate output df is set to 2048. Besides, the dropout rate is set to 0.8 in the process of training. We implement our model in PyTorch2 1.0. In all experiment, the batch size is set to 4096. We use the Adam optimizer (Kingma and Ba, 2014) to train our model with β1 = 0.9, β2 = 0.998 and ϵ = 10−8. The learning rate varies every step with the Noam decay strategy (Vaswani et al., 2017) and the warmup threshold is 8000. The maximum norm of gradient-clipping is set to 2. In the end, we conduct our experiment on one machine with 4 NVIDIA Titan Xp GPUs and the training process lasts 200,000 steps for each model. We use the beam search algorithm (Sutskever et al., 2014) with coverage technique (Tu et al., 2016) to generate multiple summary candidates in parallel to obtain better results, the coverage weight is set to 1. For fear of favoring shorter generated summaries, we utilize length penalty (Wu et al., 2016) during the process of inference. 
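For concreteness, the following minimal PyTorch sketch (again, not the released code) illustrates the two extensions of Sections 3.2 and 3.3: the Gaussian focal bias of Eqs. 5-8 added to the encoder attention energies before the softmax, and the saliency gate of Eqs. 9-10 applied to the bridge attention. Single-head shapes are used for readability, and the module and tensor names are illustrative.

```python
import math
import torch
import torch.nn as nn

class FocusAttentionBias(nn.Module):
    """Gaussian focal bias f_l added to encoder attention energies (Eqs. 5-8)."""
    def __init__(self, d_m=512):
        super().__init__()
        self.W_p = nn.Linear(d_m, d_m, bias=False)
        self.W_g = nn.Linear(d_m, d_m, bias=False)
        self.u_c = nn.Linear(d_m, 1, bias=False)        # center-position projection U_c
        self.u_d = nn.Linear(d_m, 1, bias=False)        # coverage-scope projection U_d

    def forward(self, Q):                               # Q: [m, d_m]
        m = Q.size(0)
        G = Q.mean(dim=0, keepdim=True)                 # mean query vector G_l
        hidden = torch.tanh(self.W_p(Q) + self.W_g(G))
        mu = m * torch.sigmoid(self.u_c(hidden))        # [m, 1], center in [0, m]
        sigma = m * torch.sigmoid(self.u_d(hidden))     # [m, 1], scope in [0, m]
        pos = torch.arange(m, dtype=Q.dtype).unsqueeze(0)        # absolute positions P^j
        return -((pos - mu) ** 2) / (sigma ** 2 / 2 + 1e-8)      # f[i, j] <= 0

def focused_self_attention(Q, K, V, focal_bias):
    """Eq. 8: softmax(QK^T / sqrt(d) + focal bias) V."""
    d = Q.size(-1)
    energy = Q @ K.transpose(-2, -1) / math.sqrt(d) + focal_bias(Q)
    return torch.softmax(energy, dim=-1) @ V

class SaliencySelection(nn.Module):
    """Saliency gate g applied to the bridge attention (Eqs. 9-10)."""
    def __init__(self, d_m=512):
        super().__init__()
        self.W_h = nn.Linear(d_m, d_m, bias=False)
        self.W_s = nn.Linear(d_m, d_m, bias=False)

    def forward(self, Q_dec, K_enc, V_enc):             # Q_dec: [n, d_m]; K_enc, V_enc: [m, d_m]
        d = Q_dec.size(-1)
        g = torch.sigmoid(self.W_h(Q_dec) @ self.W_s(K_enc).transpose(-2, -1))   # [n, m]
        attn = torch.softmax(Q_dec @ K_enc.transpose(-2, -1) / math.sqrt(d), dim=-1)
        return (g * attn) @ V_enc                        # gate the weights, then aggregate values
```

Since g in Eq. 10 has shape [n, m], the sketch applies it to the attention weights before they aggregate the value vectors; this is the reading of the element-wise product adopted here.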
We set the beam size to 10, the length penalty parameter α to 0.9 and β to 5. The minimum length of the generated summary is set to 35 and the batch size for inference is set to 1. Following the previous studies, we use the ROUGE scores (Lin and Hovy, 2003) to evaluate the performance of our model with Python implementation3 and standard options. ROUGE scores measure the quality of summary by computing overlapping lexical units with references, such as uni-gram, bi-gram and longest common subsequence (LCS). F-measures ROUGE-1 (unigram), ROUGE-2 (bi-gram) and ROUGE-L (LCS) are reported as the evaluation metrics. 4.2 Baselines In this work, we compare our approach with these following state-of-the-art baselines: 2https://pytorch.org/ 3https://github.com/falcondai/pyrouge Models RG-1 RG-2 RG-L words-1vt2k-temp-att 36.64 15.66 33.42 PG+cov 39.53 17.28 36.38 ConvS2S 39.75 17.29 36.54 Explicit-Selection 41.54 18.18 36.47 ROUGEEsal+Ent 40.43 18.00 37.10 Bottom-Up 41.22 18.68 38.34 Basic model 39.45 17.20 36.49 +Focus-Attention 40.29 18.63 38.11 +Saliency-Selection 40.76 18.40 37.67 ETADS 41.75 19.01 38.89 Table 2: ROUGE scores on the CNN/Daliy Mail test set. All ROUGE scores have 95% confidence interval of at most ±0.24 computed by the official ROUGE script. To save space, we use “PG+cov” and “Bottom-Up” to denote the baseline “PointerGenerator+coverage” and “Bottom-Up Summarization”. The symbol “+” stands for the corresponding module is added on the “Basic model” which is a vanilla Transformer with 4 identical layers. words-1vt2k-temp-att: Nallapati et al. (2016) build this model with the basic seq2seq encoderdecoder architecture and attention mechanism, which is a pioneering effort for much other work. Pointer-generator+coverage: To deal with Out-Of-Vocabulary words (OOV words) and repeating problem, See et al. (2017) combine the pointer network into the RNN-based seq2seq model and design a coverage mechanism. ConvS2S: Gehring et al. (2017) creatively utilize convolution neural networks to build seq2seq model and achieve high performance on many tasks, including abstractive summarization. Explicit-Selection: Li et al. (2018b) propose to extend the basic seq2seq model with an information selection layer to explicitly control information flow. ROUGESal+Ent(RL): Pasunuru and Bansal (2018) address main difficulties via a reinforcement learning approach with two novel reward functions. Bottom-Up Summarization: This work combines extractive and abstractive summarization by firstly using a data-efficient content selector to over-determine phrase related (Gehrmann et al., 2018). 4.3 Results The experimental results are given in Table 2. Overall, ETADS achieves advantages of ROUGE 2138 F1 scores over all of the other baselines (reported in their own articles) and two extensions we proposed both improve the performances based on the basic model. Concretely, we design the focusattention mechanism to improve the capability of capturing the local context information and further encode the document comprehensively. Therefore, the basic model with focus-attention mechanism is expected to achieve improvement in producing summaries with continuous salient segments. The significant improvement on ROUGEL verifies our hypothesis. Besides, we notice that the improvements provided by the basic model with saliency-selection network particularly lie in ROUGE-1 F1 scores. 
We consider the reason may lie in the saliency-selection network is more sensitive to the short segments due to the separate saliency measuring process. Comparing with the two classical RNN-based baselines words-1vt2k-temp-att and Pointergenerator+coverage and one CNN-based baseline ConvS2S, our basic model is capable of achieving equivalent performance. We believe it should give credit to the capability of modeling long-term dependencies. When compared with more recent work, Explicit-Selection equips a selection layer similar to our saliencyselection network to mine salient information. Despite being aware of this problem, our saliencyselection network achieves better performance with the help of stacked architecture. The performance of reinforcement learning based model ROUGEEsal+Ent is worse than our model obviously. The strongest baseline Bottom-Up Summarization combines the advantages of CNNbased model and RNN-based model but is also slightly inferior to our model. 4.4 Case Study To further illustrate the effectiveness of our proposed ETADS vividly and analyze the reasons of improving the performance, we compare the generated summaries by baselines words-1vt2ktemp-att, Bottom-Up Summarization and our ETADS approach. For the case in Table 3, the input document focuses on analyzing the latest financial report of the Apple company and further discusses the impact of the new Apple Watch on retail revenue. The performance of words1vt2k-temp-att is unsatisfactory, three generated sentences are irrelevant to the main concepts and Reference summary: apple sold more than 61 million iphones in the quarter . apple did n’t report any results for the new apple watch . believed around 2 million watches have been sold , according to estimates . words-1vt2k-temp-att: the iphone is still the engine behind apple ’s phenomenal success . apple has vied with south korea ’s samsung for the no. 1 position in the global smartphone market . apple ceo tim cook has said he ’s optimistic about new markets such as [china china china china china ...] Bottom-Up Summarization: [apple sold more than 61 million iphones in the quarter] , accounting for more than two-thirds of its $ 58 billion in revenue for the quarter and the lion ’s share of $ 13.6 billion in profit - and up 40 % from a year ago . $ 200 billion in cash , up from around $ 150 billion for one year . revenue from mac computers rose 2 % to $ 5.6 billion . ETADS: [apple sold more than 61 million iphones in the quarter .] it was a 40 percent increase over the number of iphones sold in the first three months of 2014 . [apple did n’t report any results for the new apple watch] , which it began selling this month , after the quarter ended . Table 3: Example of generated summaries. We highlight the words or sentences in [red] which are consistent with partial reference summary. Repetition segments are marked in [blue] . even contains repetitions at the end of the summary. Abstractive summary generated by baseline Bottom-Up Summarization is much more better, which indicates the effectiveness of modifications. However, the generated summary only contains partial salient information of the document. ETADS achieves the best performance in this case due to two of the generated sentences containing salient information and without repetitions. The above results verify that the extensions in our model improve the capability of document summarization from not only quantitative but also qualitative perspectives. 
4.5 Discussion In this section, we first validate the robustness of our model with different encoder/decoder architectures and then discuss the different deploy strategies for our extensions. 4.5.1 Architecture Robustness We conduct experiments to see how the model’s performance is affected by the stacked architecture. We perform a set of experiments which adjust the structures of the encoder and decoder to 2139 Encoder RG-1 RG-2 RG-L # of paras 2 layers 35.12 14.05 32.41 3190K * 4 layers 39.45 17.20 36.49 3821K 6 layers 39.67 17.47 35.71 4451K Decoder RG-1 RG-2 RG-L # of paras 2 layers 31.10 12.93 27.04 3406K 4 layers 39.45 17.20 36.49 4246K 6 layers 39.35 18.01 36. 21 5087K * 1K equals to 1000 Table 4: ROUGE scores on the CNN/Daily Mail test set. “# of paras” denotes the number of training parameters. We fix the decoder to 4 layers when adjust structure of the encoder and vice versa. Layers RG-1 RG-2 RG-L 40.87 17.78 37.73 [1-2] 42.81 20.12 39.68 [3-4] 41.91 19.65 39.32 [1-4] 43.06 20.85 40.12 Table 5: ROUGE precision scores on the CNN/Daliy Mail test set. We use the token “-” to indicate the basic model which does not contain saliency-selection network. “[1-2]” indicates we deploy saliency-selection network on the first and second layer of basic model, “[3-4]” and “[1-4]” are similar. 2, 4 and 6 layers respectively. Experimental results on the test set in Table 4 show that there is no notable difference between 4 layers or 6 layers for encoder or decoder. However, the number of parameters is significantly increased nearly 1/4 for 6 layers, which means more time is needed for convergence. Employing 2 layers for either the encoder or decoder leads to rapid performance degradation. From the aspect of efficiency and effectiveness, we decide to equip 4 layers for the encoder and decoder eventually. 4.5.2 Deployment Strategies In this section, we discuss the different deployment strategies for our extensions on the encoderdecoder framework. Firstly, we deploy the saliency-selection network on different layers to discuss strategies of saliency-selection deployment. As we mentioned before, the major difficulty of this salient information selection procedure is to comprehend the relative semantic meanings and make the correct selection, which significantly affects the precision scores. Therefore, it is proper to use precision Layers RG-1 RG-2 RG-L 41.10 17.82 37.91 [1-2] 40.92 18.61 38.22 [3-4] 40.57 18.20 38.19 [1-4] 41.31 18.72 38.93 Table 6: ROUGE recall scores on the CNN/Daliy Mail test set. “-” to indicate the basic model which does not contain focus-attention mechanism. Other symbols express same meaning with Table 5 scores to measure effectiveness. From Table 5, it can be observed that the improvements brought by our saliency-selection network do not increase with layers linearly. In the shallow layers, the saliency-selection network contributes to notable improvement which is close to the best results we achieved. However, for the deeper layers, the improvement brought by the saliency-selection network is limited. We believe it can be attributed to the characteristics of our encoder-decoder framework. Self-attention sub-layer effectively reduces the cost of long-term information fusion, which leads to difficult to comprehend the original semantic information. The saliency-selection network we proposed is not competent to distinguish noise information when the original semantic information becoming confusing. 
Furthermore, we discuss the strategies for focus-attention mechanism with ROUGE recall scores. The results of Table 6 demonstrate a similar phenomenon to Table 5 where improvements mainly come from shallow layers. We believe it is a trade-off between local context and global information. Focus-attention mechanism aims to gather attention to the local context around a center which deviates from the original goal. (Vaswani et al., 2017; Shi et al., 2016) indicate that there exists a consensus in the NLP community that shallow layers of a stacked model are sensitive to local context and deeper layers modeling global semantics. Therefore, as the module designed to capture local context, we believe it is reasonable to obtain more promotion where it is equipped on shallower layers which is also a side proof of effectiveness. 5 Conclusion In this paper, we propose a novel framework for abstractive document summarization with extended Transformer model. The proposed model consists of a concise pipeline. First, the stacked 2140 encoder with focus-attention mechanism captures long-term dependencies and local context of input document comprehensively. Then the decoder with saliency-selection network distinguishes and condenses the salient information into the output. Finally, an inference algorithm produces the abstractive summaries. Our experiments show that the proposed model achieves a significant improvement for abstractive document summarization over previous state-of-the-art baselines. Acknowledgments This work is supported by Chinese National Research Fund (NSFC) Key Project No. 61532013 and No. 61872239. FDCT/0007/2018/A1, DCTMoST Joint-project No. (025/2015/AMJ), University of Macau Grant Nos: MYRG2018-00237RTO, CPG2018-00032-FST and SRG201800111-FST of SAR Macau, China. References Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for document summarization. arXiv preprint arXiv:1610.08462. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484–494. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. JMLR. org. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. 
Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631–1640. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wojciech Kry´sci´nski, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808–1817. Piji Li, Lidong Bing, and Wai Lam. 2018a. Actorcritic based training framework for abstractive summarization. arXiv preprint arXiv:1803.11070. Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2091–2100. Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018b. Improving neural abstractive document summarization with explicit information selection modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1787–1796. Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018c. Improving neural abstractive document summarization with structural regularization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4078–4087. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Jeffrey Ling and Alexander Rush. 2017. Coarse-to-fine attention models for document summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 33–42. 2141 Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Ramakanth Pasunuru and Mohit Bansal. 2018. Multireward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 646–653. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526– 1534. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems, pages 3104–3112. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1171–1181. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 76–85. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Baosong Yang, Zhaopeng Tu, Derek F Wong, Fandong Meng, Lidia S Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449– 4458.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2142–2152 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2142 Unsupervised Neural Single-Document Summarization of Reviews via Learning Latent Discourse Structure and its Ranking Masaru Isonuma1 Junichiro Mori1,2 Ichiro Sakata1 1The University of Tokyo 2RIKEN {isonuma, isakata}@ipr-ctr.t.u-tokyo.ac.jp [email protected] Abstract This paper focuses on the end-to-end abstractive summarization of a single product review without supervision. We assume that a review can be described as a discourse tree, in which the summary is the root, and the child sentences explain their parent in detail. By recursively estimating a parent from its children, our model learns the latent discourse tree without an external parser and generates a concise summary. We also introduce an architecture that ranks the importance of each sentence on the tree to support summary generation focusing on the main review point. The experimental results demonstrate that our model is competitive with or outperforms other unsupervised approaches. In particular, for relatively long reviews, it achieves a competitive or better performance than supervised models. The induced tree shows that the child sentences provide additional information about their parent, and the generated summary abstracts the entire review. 1 Introduction The need for automatic document summarization is widely increasing because of the vast amounts of online textual data that continue to grow. As for product reviews on E-commerce websites, succinct summaries allow both customers and manufacturers to obtain large numbers of opinions (Liu and Zhang, 2012). Under these circumstances, supervised neural network models have achieved wide success, using a large number of reference summaries (Wang and Ling, 2016; Ma et al., 2018). However, a model trained on these summaries cannot be adopted in other domains, as salient phrases are not common across domains. It requires a significant cost to prepare large volumes of references for each domain (Isonuma et al., 2017). An unsupervised approach is a possible solution to such a problem. Previously, unsupervised learning has been widely applied to extractive approaches (Radev et al., 2004; Mihalcea and Tarau, 2004). As mentioned in (Carenini et al., 2013; Gerani et al., 2014), extractive approaches often fail to provide an overview of the reviews, while abstractive ones successfully condense an entire review via paraphrasing and generalization. Our work focuses on the one-sentence abstractive summarization of a single-review without supervision. The difficulties of unsupervised abstractive summarization are two-fold: obtaining the representation of the summaries, and learning a language model to decode them. As an unsupervised approach for multiple reviews, Chu and Liu (2018) regarded the mean of the document embeddings as the summary, while learning a language model via the reconstruction of each review. By contrast, such an approach cannot be extended to a single-review directly, because it also condenses including trivial or redundant sentences (its performance is demonstrated in Section 4.4). To overcome these problems, we apply the discourse tree framework. 
Extractive summarization and document classification techniques sometimes use a discourse parser to gain a concise representation of documents (Hirao et al., 2013; Bhatia et al., 2015; Ji and Smith, 2017); however, Ji and Smith (2017) pointed out the limitations of using external discourse parsers. In this context, Liu and Lapata (2018) proposed a framework to induce a latent discourse tree without a parser. While their model constructed the tree via a supervised document classification task, our model induces it by identifying and reconstructing a parent sentence from its children. Consequently, we gain the representation of a summary as the root of the induced latent discourse tree, while learning a language model through reconstruction. 2143 Good quality floor puzzle (1) This floor puzzle is a nice size not huge but larger than normal kid puzzles (2) The pieces are thick and lock together well even on carpet (5) My son put it together on berber carpet without having any issues with pieces not staying together (3) The pieces are cardboard but are very dense almost like wood but not quite that solid Summary: Body: (4) I bought this puzzle for my son for his first birthday at the store … … … … … … Figure 1: Example of the discourse tree of a jigsaw puzzle review. StrSum induces the latent tree and generates the summary from the children of a root, while DiscourseRank supports it to focus on the main review point. Figure 1 shows an example of a jigsaw puzzle review and its dependency-based discourse tree. The summary describes its quality. The child sentences provide an explanation in terms of the size (1st) and thickness (2nd), or provide the background (4th). Thus, we assume that reviews can generally be described as a multi-root non-projective discourse tree, in which the summary is the root, and the sentences construct each node. The child sentences present additional information about the parent sentence. To construct the tree and generate the summary, we propose a novel architecture; StrSum. It reconstructs a parent from its children recursively and induces a latent discourse tree without a parser. As a result, our model generates a summary from the surrounding sentences of the root while learning a language model through reconstruction in an endto-end manner. We also introduce DiscourseRank, which ranks the importance of each sentence in terms of the number of descendants. It supports StrSum to generate a summary that focuses on the main review point. The contributions of this work are three-fold: • We propose a novel unsupervised end-to-end model to generate an abstractive summary of a single product review while inducing a latent discourse tree • The experimental results demonstrate that our model is competitive with or outperforms other unsupervised models. In particular, for long reviews, it achieves a competitive or better performance than the supervised models. • The induced tree shows that the child sentences present additional information about their parent, and the generated summary abstracts for the entire review. 2 Proposed Model In this section, we present our unsupervised endto-end summarization model with descriptions of StrSum and DiscourseRank. 2.1 StrSum: Structured Summarization Model Training: The outline of StrSum is presented in Figure 2. yi and si ∈Rd indicate the i-th sentence and its embedding in a document D = {y1, . . . , yn}, respectively. wt i is the t-th word in a sentence yi = {w1 i , . . . , wl i}. 
si is computed via a max-pooling operation across hidden states ht i ∈Rd of the Bi-directional Gated Recurrent Units (Bi-GRU): −→ h t i = −−−→ GRU(−→ h t−1 i , wt i) (1) ←− h t i = ←−−− GRU(←− h t+1 i , wt i) (2) ht i = [−→ h t i, ←− h t i] (3) ∀m ∈{1, . . . , d}, si,m = max t ht i,m (4) Here, we assume that a document D and its summary compose a discourse tree, in which the root is the summary, and all sentences are the nodes. We denote aij as the marginal probability of dependency where the i-th sentence is the parent node of the j-th sentence. In particular, a0j denotes the probability that a root node is the parent (see Figure 2). We define the probability distribution aij (i ∈{0, . . . , n}, j ∈{1, . . . , n}) as the posterior marginal distributions of a nonprojective dependency tree. The calculation of the marginal probability is explained later. Similar to (Liu and Lapata, 2018), to prevent overload of the sentence embeddings, we decompose them into two parts: [se i, sf i ] = si (5) 2144 yi si : generated summary (output) : i-th sentence (input) : i-th sentence embedding : i-th sentence (generated) : i-th sentence embedding (generated) : marginal probability where the i-th sentence is the parent of the j-th sentence aij y0 yi si ^ ^ s0 s1 si sj s1 si sj a01 aij ^ ^ ^ ≈ y1 yi yj ≈ ≈ : : : : : : y1 yi yj : : y1 yi yj : : encoding decoding Children Parents y0 : : : : : ^ ^ ^ ^ ^ ^ Figure 2: Outline of StrSum. where the semantic vector se i ∈Rde encodes the semantic information, and the structure vector sf i ∈Rdf is used to calculate the marginal probability of dependencies. The embedding of the parent sentence ˆsi and that of the summary ˆs0 are defined with parameters Ws ∈Rde∗de and bs ∈Rde as: ˆsi = tanh { Ws( n ∑ j=1 aijse j) + bs } (6) Using ˆsi, the GRU-decoder learns to reconstruct the i-th sentence, i.e., to obtain the parameters θ that maximize the following log likelihood: n ∑ i=1 l ∑ t=1 log P(wt i|w<t i , ˆsi, θ) (7) Summary Generation: An explanation of how the training contributes to the learning of a language model and the gaining of the summary embedding is provided here. As for the former, the decoder learns a language model to generate grammatical sentences by reconstructing the document sentences. Therefore, the model can appropriately decode the summary embedding to ˆy0. As for the latter, if the j-th sentence contributes to generating the i-th one, aij get to be higher. This mechanism models our assumption that child sentences can generate their parent sentence, but not vice versa, because the children present additional information about their parent. Hence, the most concise k-th sentences (e.g., the 1st, 2nd, and 4th in Figure 1), provide less of a contribution to the reconstruction of any other sentences. Thus, aik get to be lower for ∀i : i ̸= 0. Because aik satisfies the constraint ∑n i=0 aik =1, a0k is expected to be larger, and thus the k-th sentence contributes to the construction of the summary embedding ˆs0. Marginal Probability of Dependency: The calculation of the marginal probability of dependency, aij, is explained here. We first define the weighted adjacency matrix F = (fij) ∈ R(n+1)∗(n+1), where the indices of the first column and row are 0, denoting the root node. fij denotes the un-normalized weight of an edge between a parent sentence i and its child j. We define it as a pair-wise attention score following (Liu and Lapata, 2018). 
By assuming a multi-root discourse tree, fij is defined as: fij =      exp(w⊤ r sf j ) (i = 0 ∧j ≥1) exp(p⊤ i Wfcj) (i ≥1 ∧j ≥1 ∧i ̸= j) 0 (j = 0 ∨i = j) (8) pi = tanh(Wpsf i + bp) (9) cj = tanh(Wcsf j + bc) (10) where Wf ∈Rdf∗df and wr ∈Rdf are parameters for the transformation. Wp ∈Rdf∗df and bp ∈Rdf are the weight and bias respectively, for constructing the representation of the parent nodes. Wc ∈Rdf∗df and bc ∈Rdf correspond to those of the child nodes. We normalize fij into aij based on (Koo et al., 2007). aij corresponds to the proportion of the total weight of the spanning trees containing an edge (i, j): aij(F ) = ∑ t∈T:(i,j)∈t v(t|F ) ∑ t∈T v(t|F ) (11) = ∂log Z(F ) ∂fij (12) v(t|F ) = ∏ (i,j)∈t fij (13) Z(F ) = ∑ t∈T v(t|F ) (14) 2145 where T denotes the set of all spanning trees in a document D. v(t|F ) is the weight of a tree t ∈T, and Z(F ) denotes the sum of the weights of all trees in T. From the Matrix-Tree Theorem (Tutte, 1984), Z(F ) can be rephrased as: Z(F ) = |L0(F )| (15) where L(F ) ∈R(n+1)∗(n+1) and L0(F ) ∈Rn∗n are the Laplacian matrix of F and its principal submatrix formed by deleting row 0 and column 0, respectively. By solving Eq. 12, aij is given by: a0j = f0j [ L−1 0 (F ) ] jj (16) aij = fij [ L−1 0 (F ) ] jj −fij [ L−1 0 (F ) ] ji (17) 2.2 DiscourseRank StrSum generates the summary under the large influence of the child sentences of the root. Therefore, sentences that are not related to the rating (e.g., the 4th in Figure 1) also affect the summary and can be considered noise. Here, we assume that meaningful sentences (e.g., the 1st and 2nd in Figure 1) typically have more descendants, because many sentences provide the explanation of them. Hence, we introduce the DiscourseRank to rank the importance of the sentences in terms of the number of descendants. Inspired by PageRank (Page et al., 1999), the DiscourseRank of the root and n sentences at the t-th iteration rt = [r0, . . . , rn] ∈R(n+1) is defined as: rt+1 = λ ˆ Art + (1 −λ)v (18) ˆaij =      0 (i = 0 ∧j = 0) 1 n (i ≥1 ∧j = 0) aij (j ≥1) (19) where ˆ A = (ˆaij) ∈R(n+1)∗(n+1) denotes the stochastic matrix for each dependency, λ is a damping factor, and v ∈R(n+1) is a vector with all elements equal to 1/(n + 1). Eq.18 implies that ri reflects rj more if the i-th sentence is more likely to be the parent of the j-th sentence. The r solution and updated score of the edge (0, j) ¯a0j (j ∈{1, . . . , n}) are calculated by: r = (1 −λ)(I −λ ˆ A)−1v (20) ¯a0j = a0jrj (21) The updated score ¯a0j is used to calculate the summary embedding ˆs0 instead of Eq.16. As a result, the generated summary reflects the sentences with a higher marginal probability of dependency on the root, while focusing on the main review point. 3 Related work 3.1 Supervised Review Summary Generation Several previous studies have addressed abstractive summarization for product reviews (Carenini et al., 2013; Di Fabbrizio et al., 2014; Bing et al., 2015; Yu et al., 2016); however, their output summaries are not guaranteed to be grammatical (Wang and Ling, 2016). Neural sequenceto-sequence models have improved the quality of abstractive summarization. Beginning with the adaptation to sentence summarization (Rush et al., 2015; Chopra et al., 2016), several studies have tackled the generation of an abstractive summary of news articles (Nallapati et al., 2016; See et al., 2017; Tan et al., 2017; Paulus et al., 2018). 
With regard to product reviews, the neural sequenceto-sequence based model (Wang and Ling, 2016) and joint learning with sentiment classification (Ma et al., 2018; Wang and Ren, 2018) have improved the performance of one-sentence summarization. Our work is also based on the neural sequence-to-sequence model, while introducing the new concept of generating the summary by recursively reconstructing a parent sentence from its children. 3.2 Unsupervised Summary Generation Although supervised abstractive summarization has been successfully improved, unsupervised techniques have still not similarly matured. Ganesan et al. (2010) proposed Opinosis, a graphbased method for generating review summaries. Their method is word-extractive, rather than abstractive, because the generated summary only contains words that appear in the source document. With the recently increasing number of neural summarization models, Miao and Blunsom (2016) applied a variational auto-encoder for semi-supervised sentence compression. Chu and Liu (2018) proposed MeanSum, an unsupervised neural multi-document summarization model for reviews. However, their model is not aimed at generating a summary from a single document and could not directly be extended. Although several previous studies (Fang et al., 2016; Dohare et al., 2018) have used external parsers for unsupervised abstractive summarization, our work, to the best of our knowledge, proposes the first unsupervised abstractive summarization method for a single product review that does not require an external parser. 2146 3.3 Discourse Parsing and its Applications Discourse parsing has been extensively researched and used for various applications. Hirao et al. (2013); Kikuchi et al. (2014); Yoshida et al. (2014) transformed a rhetorical structure theory-based discourse tree (RST-DT; Mann and Thompson, 1988) into a dependencybased discourse tree and regarded the root and the surrounding elementary discourse units as a summary. Gerani et al. (2014) constructed a discourse tree and ranked the aspects of reviews for summarization. Bhatia et al. (2015); Ji and Smith (2017) also constructed a dependency-based discourse tree for document classification. Ji and Smith (2017) pointed out the limitations of using external parsers, demonstrating that the performance depends on the amount of the RST-DT and the domain of the documents. Against such a background, Liu and Lapata (2018) proposed a model that induces a latent discourse tree without an external corpus. Inspired by structure bias (Cheng and Lapata, 2016; Kim et al., 2017), they introduced Structured Attention, which normalizes attention scores as the posterior marginal probabilities of a nonprojective discourse tree. The probability distribution of Structured Attention implicitly represents a discourse tree, in which the child sentences present additional information about their parent. We extend it to the unsupervised summarization, i.e., obtaining a summary as the root sentence of a latent discourse tree. While Liu and Lapata (2018) introduce a virtual root sentence and induce a latent discourse tree via supervised document classification, we generate a root sentence via reconstructing a parent sentence from its children without supervision. 4 Experiments In this section, we present our experiments for the evalation of the summary generation performance of online reviews. The following section provides the details of the experiments and results. 
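Before turning to the experimental setup, the two core computations of Section 2, namely the matrix-tree marginals of Eqs. 15-17 and the closed-form DiscourseRank of Eqs. 19-21, can be made concrete with a short NumPy sketch. The dense-matrix formulation and function names below are our own illustration of those equations, not the authors' released code.

```python
import numpy as np

def tree_marginals(F):
    """Marginal edge probabilities of a non-projective dependency tree via the
    Matrix-Tree Theorem (cf. Eqs. 15-17). F is an (n+1) x (n+1) matrix of
    non-negative edge weights; index 0 is the root, with F[:, 0] = 0 and zero diagonal."""
    n = F.shape[0] - 1
    L = np.diag(F.sum(axis=0)) - F           # Laplacian of F
    L0_inv = np.linalg.inv(L[1:, 1:])        # inverse of L_0(F): drop row 0 and column 0
    A = np.zeros_like(F, dtype=float)
    for j in range(1, n + 1):
        A[0, j] = F[0, j] * L0_inv[j - 1, j - 1]                                    # Eq. 16
        for i in range(1, n + 1):
            if i != j:
                A[i, j] = F[i, j] * (L0_inv[j - 1, j - 1] - L0_inv[j - 1, i - 1])   # Eq. 17
    return A

def discourse_rank_root_scores(A, lam=0.9):
    """Closed-form DiscourseRank (Eq. 20) and the re-weighted root scores (Eq. 21)."""
    n = A.shape[0] - 1
    A_hat = A.copy()
    A_hat[1:, 0] = 1.0 / n                   # Eq. 19: non-root nodes point back to the root
    v = np.full(n + 1, 1.0 / (n + 1))        # uniform teleportation vector
    r = (1.0 - lam) * np.linalg.solve(np.eye(n + 1) - lam * A_hat, v)
    return A[0, 1:] * r[1:]                  # Eq. 21: scores used instead of a_0j for the summary embedding
```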
1 4.1 Dataset Our experiments use the Amazon product review dataset (McAuley et al., 2015; He and McAuley, 2016), which contains Amazon online reviews and their one-sentence summaries. It includes 142.8 1The code to reproduce the results is available at: https://github.com/misonuma/strsum Domains Train Valid Eval Toys & Games 27,037 498 512 Sports & Outdoors 37,445 511 466 Movies & TV 408,827 564 512 Table 1: Number of reviews for training (Train), validation (Valid) and evaluation (Eval). million reviews spanning May 1996 - July 2014. Ma et al. (2018); Wang and Ren (2018) used this dataset for the evaluation of their supervised summary generation model. The same domains considered in their previous work are selected for this study; Toys & Games, Sports & Outdoors, and Movies & TV. Because our model is trained by identifying and reconstructing a parent sentence from its children, it sometimes fails to construct an appropriate tree for relatively short reviews. It also has a negative influence on summary generation. Therefore, we use reviews with 10 or more sentences for training, and those with 5 or more sentences for validation and evaluation. Table 1 indicates the number of reviews in each domain. 4.2 Experimental Details The source sentences and the summaries share the same vocabularies, which are extracted from the training sources of each domain. We limit a vocabulary to the 50, 000 most frequent words appearing in training sets. The hyper-parameters are tuned based on the performance using the reference summaries in validation sets. We set 300-dimensional word embeddings and initialize them with pre-trained FastText vectors (Joulin et al., 2017). The encoder is a single-layer Bi-GRU with 256-dimensional hidden states for each direction and the decoder is a uni-directional GRU with 256-dimensional hidden states.  The damping factor of DiscourseRank is 0.9. We train the model using Ada-grad with a learning rate of 10−1, an initial accumulator value of 10−1, and a batch size of 16. At the evaluation time, a beam search with a beam size of 10 is used. Similar to (See et al., 2017; Ma et al., 2018), our evaluation metric is the ROUGE-F1 score (Lin, 2004), computed by the pyrouge package. We use ROUGE-1, ROUGE-2, and ROUGE-L, which measure the word-overlap, bigram-overlap, and longest common sequence between the reference and generated summaries, respectively. 2147 Domain Toys & Games Sports & Outdoors Movies & TV Metric R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L Unuspervised approaches TextRank 8.63 1.24 7.26 7.16 0.89 6.39 8.27 1.44 7.35 Opinosis 8.25 1.51 7.52 7.04 1.42 6.45 7.80 1.20 7.11 MeanSum-single 8.12 0.58 7.30 5.42 0.47 4.97 6.96 0.35 6.08 StrSum 11.61 1.56 11.04 9.15 1.38 8.79 7.38 1.03 6.94 StrSum+DiscourseRank 11.87 1.63 11.40 9.62 1.58 9.28 8.15 1.33 7.62 Supervised baselines Seq-Seq 13.50 2.10 13.31 10.69 2.02 10.61 7.71 2.18 7.08 Seq-Seq-att 16.28 3.13 16.13 11.49 2.39 11.47 9.05 2.99 8.46 Table 2: ROUGE F1 score of the evaluation set (%). R-1, R-2 and R-L denote ROUGE-1, ROUGE-2, and ROUGE-L, respectively. The best performing model among unsupervised approaches is shown in boldface. 4.3 Baseline For the comparisons, two unsupervised baseline models are employed. A graph-based unsupervised sentence extraction method, TextRank is employed (Mihalcea and Tarau, 2004), where sentence embeddings are used instead of bag-ofwords representations, based on (Rossiello et al., 2017). 
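To make the encoder configuration above concrete, the following is a minimal PyTorch sketch of the Bi-GRU sentence encoder with max-pooling over time (Eqs. 1-4), using the reported sizes (a 50,000-word vocabulary, 300-dimensional embeddings, and 256 hidden units per direction). The class and argument names are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Illustrative sentence encoder: embeddings -> single-layer Bi-GRU ->
    element-wise max-pooling over time (Eqs. 1-4)."""
    def __init__(self, vocab_size=50000, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # could be initialized with FastText vectors
        self.bigru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, word_ids):
        # word_ids: (batch, sent_len) -> sentence embedding s_i: (batch, 2 * hidden_dim)
        h, _ = self.bigru(self.embed(word_ids))          # (batch, sent_len, 2 * hidden_dim)
        return h.max(dim=1).values                       # Eq. 4: element-wise max over the time dimension
```

In the full model, the resulting 512-dimensional vector would then be split into the semantic and structure vectors of Eq. 5.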
As an unsupervised word-level extractive approach, we employ Opinosis (Ganesan et al., 2010), which detects salient phrases in terms of their redundancy. Because we observe repetitive expressions in the dataset, Opinosis is added as a baseline. Both methods extract or generate a onesentence summary. Furthermore, a third, novel unsupervised baseline model MeanSum-single is introduced, which is an extended version of the unsupervised neural multi-document summarization model (Chu and Liu, 2018). While it decodes the mean of multiple document embeddings to generate the summary, MeanSum-single generates a single-document summary by decoding the mean of the sentence embeddings in a document. It learns a language model through reconstruction of each sentence. By comparing with MeanSumsingle, we verify that our model focuses on the main review points, and does not simply take the average of the entire document. As supervised baselines, we employ vanilla neural sequence-to-sequence models for abstractive summarization (Hu et al., 2015), following previous studies (Ma et al., 2018; Wang and Ren, 2018). We denote the model as Seq-Seq and that with the attention mechanism as Seq-Seq-att. The encoder and decoder used are the same as those used in our model. -14 15-29 30Number of sentences in each document 0 5 10 15 20 ROUGE-L F1 11.01 11.05 16.06 10.19 12.62 16.20 14.32 15.56 15.87 Toys & Games -14 15-29 30Number of sentences in each document 0 5 10 15 20 ROUGE-L F1 8.53 9.09 11.82 10.70 10.48 9.85 11.14 11.78 7.03 Sports & Outdoors -14 15-29 30Number of sentences in each document 0 5 10 15 20 ROUGE-L F1 6.33 7.24 8.25 7.80 8.40 10.08 9.93 8.63 5.45 Movies & TV : StrSum+DiscourseRank : Seq-Seq-att : StrSum Figure 3: ROUGE-L F1 score on evaluation set with various numbers of sentences. 4.4 Evaluation of Summary Generation Table 2 shows the ROUGE scores of our models and the baselines for the evaluation sets.2 With regards to Toys & Games and Sports & Outdoors, our full model (StrSum + DiscourseRank) achieves the best ROUGE-F1 scores among the unsupervised approaches. As for ROUGE-1 and ROUGE-L, two-tailed t-tests demonstrate that the 2As Yu et al. (2016); Ma et al. (2018) reported, the reviews and their summaries are usually colloquial and contain more noise than news articles. Therefore, the ROUGE scores on the Amazon review dataset are lower than those obtained for other summarization datasets, such as DUC. 2148 • Reference: love this game • Seq-Seq-att: fun game • Our Model (Full): i love this game • Reference: good value • Seq-Seq-att: good for the price • Our Model (Full) : this is a great product for the price Generated Summary (a) (b) • Reference: disappointing • Seq-Seq-att: great dvd • Our Model (Full) : this is a great movie (c) 1. I love this game 2. It is so much fun 3. I’m all about new and different games 4. I love to play this with my brother because he is very bad at keeping score so I win most of the time and he loves to tell each characters story 5. And he loves to tell each characters story and to tell why each person got what fate 6. It’s a must buy if you want a fun and fast card game 1. have not used it yet at the campground but tested it at home and works fine 2. use a toothpick to hold the valve open so you can deflate it easily 3. if you sit on it and your butt just touches the ground your at the right pressure 4. for the price i would recommend it for occasional use 5. if your a hard core camper you may want a name brand 6. 
it suits my needs perfectly Induced Discourse Tree Sentences in the Main Body 1. this had so much potential 2. my favorite 3 guitarist yet the sound is muddied 3. it should have been recorded in 5 4. the video is good 5. the sound is horrible though and that 's what makes this a travesty 6. i am so disappointed as for concert dvds audio is the most important factor 7. not even anamorphic root 1 7 5 6 4 3 2 root 1 6 2 5 4 3 root 2 3 1 4 6 5 Figure 4: Examples of generated summaries and induced latent discourse trees. difference between our models and the others are statistically significant (p < 0.05). Because the abstractive approach generates a concise summary by omitting trivial phrases, it can lead to a better performance than those of the extractive ones. On the other hand, for Movies & TV, our model is competitive with other unsupervised extractive approaches; TextRank and Opinosis. One possible explanation is that the summary typically includes named entities, such as the names of characters, actors and directors, which may lead to a better performance of the extractive approaches. For all datasets, our full model outperforms the one using only StrSum. Our models significantly outperform MeanSum-single, indicating that our model focuses on the main review points, and does not simply take the average of the entire document. Figure 3 shows the ROUGE-L F1 scores of our models on the evaluation sets with various numbers of sentences compared to the supervised baseline model (Seq-Seq-att). For the case of a dataset with less than 30 sentences, the performance of our models is inferior to that of the supervised baseline model. Because our full model generates summaries via learning the latent discourse tree, it sometimes fails to construct a tree, and thus experiences a decline in performance for relatively short reviews. On the other hand, for datasets with the number of sentences exceeding 30, our model achieves competitive or better performance than the supervised model. 5 Discussion 5.1 Analysis of the Induced Structure Figure 4 presents the generated summary and the latent discourse tree induced by our full model. We obtained the maximum spanning tree from the probability distribution of dependency, using Chu–Liu–Edmonds algorithm (Chu, 1965; Edmonds, 1967). Figure 4(a) shows the summary and the latent discourse tree for a board game review. Our model generates the summary, ”i love this game”, which is almost identical to the reference. The induced tree shows that the 2nd sentence elaborates on the generated summary, while the 3rd sentence provides its background. The 4th and 5th sentences explain the 1st sentence in detail, i.e., describe why the author loves the game. Figure 4(b) shows the summary and latent discourse tree of a camping mattress review. Although there is no word-overlap between the reference and generated summary, our model focuses on the positivity in terms of the price. On the induced tree, the 1st to 3rd sentences provide a background of the summary and mention the high quality of the product. The 6th sentence indicates that reviewer is satisfied, while the 4th sentence provides its explanation with regards to the price. In Figure 4(c), we present a failure example of a review of a concert DVD. 
The reviewer is disappointed by the poor quality of the sound; however 2149 Toys & Games StrSum StrAtt Projective 38.58% 66.07% Height 3.06 2.42 Sports & Outdoors StrSum StrAtt Projective 41.26% 58.85% Height 2.72 2.50 Movies & TV StrSum StrAtt Projective 36.31% 61.20% Height 3.63 2.37 Table 3: Descriptive statistics for induced latent discourse trees. StrAtt denotes the Structured Attention Model (Liu and Lapata, 2018). our model generates a positive summary, ”this is a great movie”. The induced tree shows that the sentences describing the high potential (1st), quality of the video (4th), and preference to the picture (7th), all affect the summary generation. Our model regards the sound quality as a secondary factor to that of the video. Therefore, it fails to prioritize the contrasting aspects; the sound and the video, and generates an inappropriate summary. DiscourseRank cannot work well on this example, because the numbers of sentences mentioning each aspect are not significantly different. To solve such a problem, the aspects of each product must be ranked explicitly, such as in (Gerani et al., 2014; Angelidis and Lapata, 2018). Table 3 summarizes the characteristics of the induced latent discourse trees. These are compared with those obtained by the Structured Attention model, StrAtt (Liu and Lapata, 2018). StrAtt induces single-root trees via the document classification task based on the review ratings. For each domain, our model induces more non-projective trees than StrAtt. Additionally, the height (the average maximum path length from a root to a leaf node) is larger than that of StrAtt. Our model estimates the parent of all the sentences and can induce deeper trees in which the edges connect trivial sentences. On the other hand, StrAtt identifies salient sentences required for the document classification, and thus induces shallow trees that connect the salient sentences and others. As our model prevents the summary from focusing on trivial or redundant sentences by inducing deep and complex trees, it specifically achieves higher performance when considering relatively long reviews. (a) (b) Figure 5: Visualization of DiscourseRank. The darker the highlightning, the higher the rank score. The references and generated summaries are also shown. 5.2 DiscourseRank Analysis In this section, we demonstrate how DiscourseRank affects the summary generation. Figure 5 visualizes the sentences in the main body and their DiscourseRank scores. We highlight the sentences that achieve a high DiscourseRank score with a darker color. A review of a car coloring book is presented in Figure 5(a). As expected, the score of the 1st sentence is low, which is not related to the review evaluations, that is, DiscourseRank emphasizes the evaluative sentences, such as the 2nd and 6th sentences. A review of swimming goggles is presented in Figure 5(b). The reviewer is satisfied with the quality of the product. The highlighting shows that DiscourseRank focuses on the sentences that mention leaking (e.g., the 2nd and 5th). While our model (with only StrSum) emphasizes the price sufficiency, DiscourseRank generates a summary describing that there is no issue with the quality. 6 Conclusion In this work, we proposed a novel unsupervised end-to-end model to generate an abstractive summary of a single product review while inducing a latent discourse tree. The experimental results demonstrated that our model is competitive with or outperforms other unsupervised approaches. 
In 2150 particular, for relatively long reviews, our model achieved competitive or better performance compared to supervised models. The induced tree shows that the child sentences present additional information about their parent, and the generated summary abstracts the entire review. Our model can also be applied to other applications, such as argument mining, because arguments typically have the same discourse structure as reviews. Our model can not only generates the summary but also identifies the argumentative structures. Unfortunately, we cannot directly compare our induced trees with the output of a discourse parser, which typically splits sentences into elementary discourse units. In future work, we will make comparisons with those of a humanannotated dataset. Acknowledgments We would like to thank anonymous reviewers and members of the Sakata&Mori Laboratory at the Graduate School of Engineering for their valuable feedback. This work was supported by CREST, JST, the New Energy and Industrial Technology Development Organization (NEDO) and Deloitte Tohmatsu Financial Advisory LLC. References Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686. Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from rst discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2212–2218. Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive multidocument summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, volume 1, pages 1587–1597. Giuseppe Carenini, Jackie Chi Kit Cheung, and Adam Pauls. 2013. Multi-document summarization of evaluative text. Computational Intelligence, 29(4):545–576. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 484– 494. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Eric Chu and Peter J Liu. 2018. Unsupervised neural multi-document abstractive summarization. Computing Research Repository, arXiv:1810.05739v3. Version 3. Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396–1400. Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multidocument summarization of opinions in reviews. In Proceedings of the 8th International Natural Language Generation Conference, pages 54–63. Shibhansh Dohare, Vivek Gupta, and Harish Karnick. 2018. Unsupervised semantic abstractive summarization. In Proceedings of ACL 2018, Student Research Workshop, pages 74–83. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71:233–240. Yimai Fang, Haoyue Zhu, Ewa Muszy´nska, Alexander Kuhnle, and Simone Teufel. 2016. 
A propositionbased abstractive summariser. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics, pages 567–578. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 340–348. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1602–1613. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pages 507–517. Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515–1520. 2151 Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967–1972. Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, and Ichiro Sakata. 2017. Extractive summarization using multi-task learning with document classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2101–2110. Yangfeng Ji and Noah A Smith. 2017. Neural discourse structure for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 996–1005. Armand Joulin, Edouard Grave, and Piotr Bojanowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, volume 2, pages 427–431. Yuta Kikuchi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, and Masaaki Nagata. 2014. Single document summarization based on nested tree structure. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 315–320. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. In Proceedings of the 5th International Conference on Learning Representations. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 141–150. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out, volume 8. Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining text data, pages 415–463. Springer. Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association of Computational Linguistics, 6:63–75. Shuming Ma, Xu Sun, Junyang Lin, and Xuancheng Ren. 2018. A hierarchical end-to-end model for jointly improving text summarization and sentiment classification. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4251–4257. 
William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43–52. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 319–328. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations. Dragomir R Radev, Hongyan Jing, Małgorzata Sty´s, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919–938. Gaetano Rossiello, Pierpaolo Basile, and Giovanni Semeraro. 2017. Centroid-based text summarization through compositionality of word embeddings. In Proceedings of the MultiLing Workshop on Summarization and Summary Evaluation Across Source Types and Genres, pages 12–21. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1073–1083. 2152 Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1171–1181. William Thomas Tutte. 1984. Graph theory, volume 21. Addison-Wesley. Hongli Wang and Jiangtao Ren. 2018. A self-attentive hierarchical model for jointly improving text summarization and sentiment classification. In Proceedings of the 10th Asian Conference on Machine Learning, pages 630–645. Lu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 47–57. Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1834–1839. Naitong Yu, Minlie Huang, Yuanyuan Shi, and Zhu Xiaoyan. 2016. 
Product review summarization by exploiting phrase properties. In Proceedings of the 26th International Conference on Computational Linguistics, pages 1113–1124.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2153–2162 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2153 BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization Kai Wang Sun Yat-sen University [email protected] Xiaojun Quan∗ Sun Yat-sen University [email protected] Rui Wang Alibaba Inc. [email protected] Abstract The success of neural summarization models stems from the meticulous encodings of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages template discovered from training data to softly select key information from each source article to guide its summarization process. Extensive experiments on a standard summarization dataset were conducted and the results show that the template-equipped BiSET model manages to improve the summarization performance significantly with a new state of the art. 1 Introduction Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea. Due to the difficulties in rewriting long documents, a large body of research on this topic has focused on paragraph-level article summarization. Among them, sequence-tosequence models have become the mainstream and some have achieved state-of-the-art performance (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016). In general, the only available information for these models during decoding is simply the source article representations from the encoder and the generated words from the previous time steps (Nallapati et al., 2016; Gu et al., 2016; Lin et al., 2018), while the previous words are also generated based on the article representations. Since natural language text is complicated and verbose in nature, and training data is insufficient in size to help the models distinguish important article information from noise, sequence-to∗Corresponding author. sequence models tend to deteriorate with the accumulation of word generation, e.g., they generate irrelevant and repeated words frequently (Koehn and Knowles, 2017). Template-based summarization (Zhou and Hovy, 2004) is an effective approach to traditional abstractive summarization, in which a number of hard templates are manually created by domain experts, and key snippets are then extracted and populated into the templates to form the final summaries. The advantage of such approach is it can guarantee concise and coherent summaries in no need of any training data. However, it is unrealistic to create all the templates manually since this work requires considerable domain knowledge and is also labor-intensive. Fortunately, the summaries of some specific training articles can provide similar guidance to the summarization as hard templates. Accordingly, these summaries are referred to as soft templates, or templates for simplicity, in this paper. Despite their potential in relieving the verbosity and insufficiency problems of natural language data, templates have not been exploited to full advantage. For example, Cao et al. (2018a) simply concatenated template encoding after the source article in their summarization work. 
To this end, we propose a Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. Our model involves a novel bi-directional selective layer with two gates to mutually select key information from an article and its template to assist with summary generation. Due to the limitations in obtaining handcrafted templates, we further propose a multi-stage process for automatic retrieval of high-quality templates from training corpus. Extensive experiments were conducted on the Gigaword dataset (Rush et al., 2015), a public dataset widely used for abstractive sentence summarization, and the results appear to 2154 be quite promising. Merely using the templates selected by our approach as the final summaries, our model can already achieve superior performance to some baseline models, demonstrating the effect of our templates. This may also indicate the availability of many quality templates in the corpus. Secondly, the template-equipped summarization model, BiSET, outperforms all the state-ofthe-art models significantly. To evaluate the importance of the bi-directional selective layer and the two gates, we conducted an ablation study by discarding them respectively, and the results show that, while both of the gates are necessary, the template-to-article (T2A) gate tends to be more important than the article-to-template (A2T) gate. A human evaluation further validates the effectiveness of our model in generating informative, concise and readable summaries. The contributions of this work include: • We propose a novel bi-directional selective mechanism with two gates to mutually select important information from both article and template to assist with summary generation. • We develop a Fast Rerank method to automatically select high-quality templates from training corpus. • Empirical evaluations on the benchmark dataset show our model has achieved a new state of the art. • The source code of this work has been released for future research.1 2 The Framework Our framework includes three key modules: Retrieve, Fast Rerank, and BiSET. For each source article, Retrieve aims to return a few candidate templates from the training corpus. Then, the Fast Rerank module quickly identifies a best template from the candidates. Finally, BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization. 2.1 Retrieve This module starts with a standard information retrieval library2 to retrieve a small set of candidates for fine-grained filtering as Cao et al. (2018a). To do that, all non-alphabetic characters (e.g., dates) 1https://github.com/InitialBug/BiSET 2https://lucene.apache.org are removed to eliminate their influence on article matching. The retrieval process starts by querying the training corpus with a source article to find a few (5 to 30) related articles, the summaries of which will be treated as candidate templates. 2.2 Fast Rerank The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles. Therefore, the Fast Rerank module is developed to identify a best template from the candidates based on their deep semantic relevance with the source article. We regard the candidate with highest relevance as the template. As illustrated in Figure 1, this module consists of a Convolution Encoder Block, a Similarity Matrix and a Pooling Layer. Convolution Encoder Block. 
This block maps the input article and its candidate templates into high-level representations. The popular ways to this are either by using recurrent neural network (RNN) or a stack of convolutional neural network (CNN), while none of them are suitable for our problem. This is because a source article is usually much longer than a template, and both RNN and CNN may lead to semantic irrelevance after encodings. Instead, we implement a new convolution encoder block which includes a word embedding layer, a 1-D convolution followed by a non-linearity function, and residual connections (Gehring et al., 2017). Formally, given word embeddings {ei}E i=1 ∈ Rd of an article, we use a 1-D convolution with kernel k ∈R2d×kd and bias bh ∈R2d to extract the n-gram features: hi = k[ei−k/2, ..., ei+k/2] + bh (1) where hi ∈R2d. We pad both sides of an article/template with zeros to keep fixed length. After that, we employ the gated linear unit (GLU) (Dauphin et al., 2017) as our activation function to control the proportion of information to pass through. GLU takes half the dimension of hi as input and reduces the input dimension to d. Let hi = [h1 i ; h2 i ], where h1 i , h2 i ∈Rd, we have: ri = GLU(hi) = GLU([h1 i ; h2 i ]) = h1 i ⊗σ(h2 i ) (2) where ri ∈Rd, σ is the sigmoid function, and ⊗ means element-wise multiplication. To retain the original information, we add residual connections 2155 Figure 1: Overview of the Fast Rerank Module. from the input of the convolution layer to the output of this block: zi = ri + ei. Similarity Matrix. The above encoder block generates a high-level representation for each source article/candidate template. Then, a similarity matrix S ∈Rm×n is calculated for a given article representation, S ∈Rm×d, and a template representation, T ∈Rn×d: sij = f(Si, Tj) (3) where f is the similarity function, and the common options for f include: f(x, y) =      xT y, dot product xT Wy, bilinear function ∥x −y∥, Euclidean distance (4) Most previous work uses dot product or bilinear function (Chen et al., 2016) for the similarity, yet we find the family of Euclidean distance perform much better for our task. Therefore, we define the similarity function as: f(x, y) = exp(−∥x −y∥2) (5) Pooling Layer. This layer is intended to filter out unnecessary information in the matrix S. Before applying such pooling operations as max-pooling and k-max pooling (Kalchbrenner et al., 2014) over the similarity matrix, we note there are repeated words in the source article, which we only want to count once. For this reason, we first identify some salient weights from S: q = maxcolumn(S) (6) where maxcolumn is a column-wise maximum function. We then apply k-max pooling over q to select k most important weights, p ∈Rk. Finally, we apply a two-layer feed-forward network to output a similarity score for the source article and the candidate template: p = k-max(q) (7) a = ReLU(Wap + b1) (8) s = σ(Wsa + b2) (9) 2.3 Traditional Methodologies In this section, we explore three traditional approaches to taking advantage of the templates for summarization. They share the same encoder and decoder layers, but own different interaction layers for combination of a source article and template. The encoder layer uses a standard bi-directional RNN (BiRNN) to separately encode the source article and the template into hidden states hs i and ht j. Concatenation. 
This approach directly concatenates the hidden state,  ht i N i=1, of a template after the article representation, {hs i}M i=1, to form a new article representation, {zs i }M+N i=1 . This approach is similar to R3Sum (Cao et al., 2018a) but uses our Fast Rerank and summary generation modules. Concatenation+Self-Attention. This approach adds a multi-head self-attention (Vaswani et al., 2017) layer with 4 heads on the basis of the above direct concatenation. DCN Attention. Initially introduced for machine reading comprehension (Seo et al., 2017), this interaction approach is employed here to create template-aware article representations. First, we compute a similarity matrix, S ∈Rm×n, for each pair of article and template words by sij = W0[hs i; ht j; hs i ⊗ht j], where ‘;’ is the concatenation operation. We then normalize each row and col2156 (a) (b) Figure 2: The structure of the proposed model: (a) the Bi-Directional Selective Encoding with Template model (BiSET) and (b) the bi-directional selective layer. umn of S by softmax, giving rise to two new matrices S and S. After that, the Dynamic Coattention Network (DCN) attention is applied to compute the bi-directional attention: A = S · ht and B = S · S T · hs, where A denotes article-totemplate attention and B is template-to-article attention. Finally, we obtain the template-aware article representation {zs i }M i=1: zs i = [hs i; Ai; hs i ⊗Ai; hs i ⊗Bi] (10) 2.4 BiSET Inspired by the research in machine reading comprehension (Seo et al., 2017) and selective mechanism (Zhou et al., 2017), we propose a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. The core idea behind BiSET is to involve templates to assist with article representation and summary generation. As shown in Figure 2, BiSET contains two selective gates: Template-to-Article (T2A) gate and Article-toTemplate (A2T) gate. The role of T2A is to use a template to filter the source article representation: gi = σ(Wshhs i + Wthht + bs) (11) hg i = hs i ⊗gi (12) where ht is the concatenation of the last forward hidden state, −→ ht n, and the first backward hidden state, ←− ht 1, of the template. On the other hand, the purpose of A2T is to control the proportion of hg in the final article representation. We assume the source article is credible and use its representation hs together with ht to calculate a confidence degree, where hs is obtained in a similar way as ht. The confidence degree d is computed by: d = σ((hs)T Wdht + bd) (13) The final source article representation is calculated as the weighted sum of hs i and hg i : zs i = dhg i + (1 −d)hs i (14) which allows a flexible manner for template incorporation and helps to resist errors when lowquality templates are given. The decoder layer. This layer includes an ordinary RNN decoder (Luong et al., 2015). At each time step t, the decoder reads the word wt−1 and hidden state hc t−1 generated in the previous step, and gives a new hidden state for the current step: hc t = RNN(wt−1, hc t−1) (15) where the hidden state is initialized with the original source article representation, hs. 
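Before continuing with the attention computation, the bi-directional selective layer above can be made concrete with a minimal PyTorch-style sketch of Eqs. (11)–(14). The module name, tensor shapes, and the way the sentence-level vectors hs and ht are obtained here are illustrative simplifications, not details of the released implementation.

```python
import torch
import torch.nn as nn

class BiSelectiveLayer(nn.Module):
    """Sketch of the T2A and A2T gates in Eqs. (11)-(14); dimensions are illustrative."""
    def __init__(self, hidden_size):
        super().__init__()
        self.w_sh = nn.Linear(hidden_size, hidden_size, bias=False)  # W_sh in Eq. (11)
        self.w_th = nn.Linear(hidden_size, hidden_size, bias=True)   # W_th and b_s in Eq. (11)
        self.w_d = nn.Bilinear(hidden_size, hidden_size, 1)          # W_d and b_d in Eq. (13)

    def forward(self, hs, hs_sent, ht_sent):
        # hs:      (batch, M, H)  article hidden states h^s_i
        # hs_sent: (batch, H)     sentence-level article vector h^s
        # ht_sent: (batch, H)     sentence-level template vector h^t
        g = torch.sigmoid(self.w_sh(hs) + self.w_th(ht_sent).unsqueeze(1))  # Eq. (11)
        hg = hs * g                                                         # Eq. (12), T2A gate
        d = torch.sigmoid(self.w_d(hs_sent, ht_sent)).unsqueeze(1)          # Eq. (13), A2T gate
        return d * hg + (1.0 - d) * hs                                      # Eq. (14)
```

Passing the sentence-level vectors in explicitly keeps the sketch agnostic to how the BiRNN states are pooled into hs and ht.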
We then compute the attention between hc t and the final article representation zs to obtain a context vector ct: εt,i = (zs i )T Wchc t (16) αt,i = exp(εt,i) PM i=1 exp(εt,i) (17) ct = M X i=1 αt,izs i (18) After that, a simple concatenation layer is used to combine the hidden state hc t and the context vector ct into a new hidden state ha t : ha t = tanh(Wha[ct; hc t]) (19) which will be mapped to a new representation of 2157 vocabulary size and fed through a softmax layer to output the target word distribution: p(wt|w1, ..., wt−1) = softmax(Wpha t ) (20) 2.5 Training The Retrieve module involves an unsupervised process with traditional indexing and retrieval techniques. For Fast Rerank, since there is no ground truth available, we use ROUGE-13 (Lin and Hovy, 2003) to evaluate the saliency of a candidate template with respect to the gold summary of current source article. Therefore, the loss function is defined as: Lr(θ) = −1 N N X i=1 [s∗log s + (1 −s∗) log(1 −s)] (21) where s is a score predicted by Equation 9, and N is the product of the training set size, D, and the number of retrieved templates for each article. For the BiSET module, the loss function is chosen as the negative log-likelihood between the generated summary, w, and the true summary, w∗: Lw(θ) = −1 D D X i=1 L X j=1 log p(w∗(i) j |w(i) j−1, x(i), y(i)) (22) where L is the length of the true summary, θ contains all the trainable variables, and x and y denote the source article and the template, respectively. 3 Experiments In this section, we introduce our evaluations on a standard dataset. 3.1 Dataset and Implementation The dataset used for evaluation is Annotated English Gigaword (Napoles et al., 2012), a parallel corpus formed by pairing the first sentence of an article with its headline. For a fair comparison, we use the version preprocessed by Rush et al. (2015)4 as previous work. During training, both the Fast Rerank and BiSET modules have a batch size of 64 with the Adam optimizer (Kingma and Ba, 2015). We also apply grad clipping (Pascanu et al., 2013) with a 3We also tried ROUGE-2 and ROUGE-L, but ROUGE-1 shows to be more suitable. 4https://github.com/harvardnlp/sent-summary range of [-5,5]. The differences of the two modules in settings are listed below. Fast Rerank. We set the size of word embeddings to 300, the convolution encoder block number to 1, and the kernel size of CNN to 3. The weights are shared between the article and template encoders. The k of k-max pooling is set to 10. L2 weight decay with λ = 3×10−6 is performed over all trainable variables. The initial learning rate is 0.001 and multiplied by 0.1 every 10K steps. Dropout between layers is applied. BiSET. A two-layer BiLSTM is used as the encoder, and another two-layer LSTM as the decoder. The sizes of word embeddings and LSTM hidden states are both set to 500. We only apply dropout in the LSTM stack with a rate of 0.3. The learning rate is set to 0.001 for the first 50K steps and halved every 10K steps. Beam search with size 5 is applied to search for optimal answers. 3.2 Evaluation Metrics Following previous work (Nallapati et al., 2016; Zhou et al., 2017; Cao et al., 2018a), we use the standard F1 scores of ROUGE-1, ROUGE2 and ROUGE-L (Lin and Hovy, 2003) to evaluate the selected templates and generated summaries, where the official ROUGE script5 is applied. We employ the normalized discounted cumulative gain (NDCG) (J¨arvelin and Kek¨al¨ainen, 2002) from information retrieval to evaluate the Fast Rerank module. 
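Since the Fast Rerank module is evaluated in the next section, a compact sketch of its scoring head (Eqs. (5)–(9)) is given below. The layer sizes, the padding of very short templates, and the reading of the column-wise maximum are illustrative assumptions rather than details of the released code.

```python
import torch
import torch.nn as nn

class FastRerankScorer(nn.Module):
    """Sketch of the similarity matrix, pooling, and scoring in Eqs. (5)-(9)."""
    def __init__(self, k=10, hidden=64):
        super().__init__()
        self.k = k
        self.ff1 = nn.Linear(k, hidden)   # W_a and b_1 in Eq. (8)
        self.ff2 = nn.Linear(hidden, 1)   # W_s and b_2 in Eq. (9)

    def forward(self, art, tpl):
        # art: (m, d) encoded article, tpl: (n, d) encoded template
        diff = art.unsqueeze(1) - tpl.unsqueeze(0)        # (m, n, d)
        S = torch.exp(-(diff * diff).sum(dim=-1))         # Eq. (5): exp(-||S_i - T_j||^2)
        q, _ = S.max(dim=0)                               # Eq. (6): best article match per template word
        p, _ = torch.topk(q, min(self.k, q.numel()))      # Eq. (7): k-max pooling
        if p.numel() < self.k:                            # pad very short templates (an assumption)
            p = torch.cat([p, p.new_zeros(self.k - p.numel())])
        a = torch.relu(self.ff1(p))                       # Eq. (8)
        return torch.sigmoid(self.ff2(a))                 # Eq. (9): relevance score in (0, 1)
```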
4 Results and Analysis In this section, we report our experimental results with thorough analysis and discussions. 4.1 Performance of Retrieve The Retrieve module is intended to narrow down the search range for a best template. We evaluated this module by considering three types of templates: (a) Random means a randomly selected summary from the training corpus; (b) Retrievetop is the highest-ranked summary by Retrieve; (c) N-Optimal means among the N top search results, the template is specified as the summary with largest ROUGE score with gold summary. As the results show in Table 1, randomly selected templates are totally irrelevant and unhelpful. When they are replaced by the Retrieve-top 5The ROUGE evaluation option: -m -n 2 -w 1.2 2158 Figure 3: Quality of candidate templates under different ranges. templates, the results improve apparently, demonstrating the relatedness of top-ranked summaries to gold summaries. Furthermore, when the NOptimal templates are used, additional improvements can be observed as N grows. This trend is also confirmed by Figure 3, in which the ROUGE scores increase before 30 and stabilize afterwards. These results suggest that the ranges given by Retrieve indeed help to find quality templates. Type ROUGE-1 ROUGE-2 ROUGE-L Random 2.58 0.00 2.48 Retrieve-top 23.46 7.67 20.94 5-Optimal 32.69 11.74 28.71 10-Optimal 35.90 13.32 31.42 15-Optimal 37.82 16.79 34.08 20-Optimal 38.92 17.72 34.94 30-Optimal 40.49 19.01 36.10 Table 1: Performance of different types of templates. 4.2 Fast Rerank As mentioned before, the role of Fast Rerank is to re-rank the initial search results and return a best template for summarization. To examine the effect of this module, we studied its ranking quality under different ranges as in Section 4.1. The original rankings by Retrieve are presented for comparison with the NDCG metric. We regard the ROUGE2 score of each candidate template with the reference summary as the ground truth. As shown in Figure 4, Fast Rerank consistently provides enhanced rankings over the original. 4.3 Interaction Approaches In Section 2.3, we also explored three alternative approaches to integrating an article with its template. The results are shown in Table 2, from which we can note that none of these approaches help yield satisfactory performance. Even though DCN Figure 4: Quality of rankings given by Fast Rerank. Attention works impressively in machine reading comprehension, it performs even worse in this task than the simple concatenation. We conjecture the reason is that the DCN Attention attempts to fuse the template information into an article as in machine reading comprehension, rather than selects key information from the two to form an enhanced article representation. Interaction method ROUGE-1 ROUGE-2 ROUGE-L Concatenation 32.26 15.30 30.19 Concate+multi self-att 33.15 15.93 31.21 DCN Attention 31.53 13.77 27.96 Bi-selective layer 39.11 19.78 36.87 Table 2: Results of different interaction approaches. 4.4 BiSET The overall performance of all the studied models is shown in Table 3. The results show that our model significantly outperforms all the baseline models and sets a new state of the art for abstractive sentence summarization. To evaluate the impact of templates on our model, we also implemented BiSET with two other types of templates: randomly-selected templates and best templates identified by Fast Rank under different ranges. 
As shown in Table 4, the performance of our model improves constantly with the improvement of template quality (larger ranges lead to better chances for good templates). Even with randomly-selected templates, our model still works with stable performance, demonstrating its robustness. 4.5 Speed Comparison Our model is designed for both accuracy and efficiency. Due to the parallelizable nature of CNN, the Fast Rerank module only takes about 30 minutes for training and 3 seconds for inference on 2159 Model ROUGE-1 ROUGE-2 ROUGE-L ABS‡ (Rush et al., 2015) 29.55 11.32 26.42 ABS+‡ (Rush et al., 2015) 29.78 11.89 26.97 RAS-Elman‡ (Chopra et al., 2016) 33.78 15.97 31.15 Featseq2seq‡ (Nallapati et al., 2016) 32.67 15.59 30.64 Open-NMT‡ (Klein et al., 2017) 34.07 16.35 31.78 SEASS‡ (Zhou et al., 2017) 36.15 17.54 33.63 S2S+CGU‡ (Lin et al., 2018) 36.30 18.00 33.80 FTSum‡ (Cao et al., 2018b) 37.27 17.65 34.24 R3Sum‡ (Cao et al., 2018a) 37.04 19.03 34.46 BiSET 39.11 19.78 36.87 Table 3: Performance of all the models, where results marked with ‡ are taken from the corresponding papers. Template Type ROUGE-1 ROUGE-2 ROUGE-L Random 33.85 15.83 31.14 5-rerank 37.69 18.62 34.38 10-rerank 38.34 19.35 34.97 20-rerank 38.89 19.64 36.67 30-rerank 39.11 19.78 36.87 Table 4: Performance of BiSET with different types of templates, where Random means randomly-selected templates, and N-rerank denotes the best templates reranked by Fast Rerank under range N. the whole test set. The BiSET model takes about 8 hours for training (GPU:GTX 1080), 6 times faster than R3Sum (Cao et al., 2018a)6. 4.6 Ablation Study The purpose of this study is to examine the roles of the bi-directional selective layer and its two gates. Firstly, we removed the selective layer and replaced it with the direct concatenation of an article with its template representation. As the results show in Table 5, the model performs even worse than some ordinary sequence-to-sequence models in Table 3. The reason might be that templates would overwhelm the original article representations and become noise after concatenation. Then, we removed the Template-to-Article (T2A) gate, and as a result the model shows a great decline in performance, indicating the importance of templates in article representations. Finally, when we removed the Article-to-Template (A2T) gate, whose role is to control the weight of T2A in article representations, only a small performance decline is observed. This may suggest that the T2A gate alone can already capture most of the important article information, while A2T plays some supplemental role. 6It takes about 2 days for training. Model ROUGE-1 ROUGE-2 ROUGE-L Concatenation 32.26 15.30 30.19 BiSET without T2A 34.51 16.55 31.17 BiSET without A2T 39.02 19.21 36.02 BiSET(full) 39.11 19.78 36.87 Table 5: ROUGE F1 scores of ablated models. 4.7 Human Evaluation We then carried out a human evaluation to evaluate the generated summaries from another perspective. Our evaluators include 8 graduate students and 4 senior undergraduates, while the dataset is 100 randomly-selected articles from the test set. Each sample in this dataset also includes: 1 reference summary, 5 summaries generated by Open-NMT7 (Klein et al., 2017), R3Sum8 (Cao et al., 2018a) and BiSET under three settings, respectively, and 3 randomly-selected summaries for trapping. We asked the evaluators to independently rate each summary on a scale of 1 to 5, with respect to its quality in informativity, conciseness, and readability. 
While collecting the results, we rejected the samples in which more than half evaluators rate the informativity of the reference summary below 3. We also rejected the samples in which the informativity of a randomly-selected summary is scored higher than 3. Finally, we obtained 43 remaining samples and calculated an average score for each aspect. As the results show in Table 6, our model not only performs much better than the baselines, it also shows quite comparable performance with the reference summaries. Model Info Concise Read R3Sum 3.30 3.83 3.90 Open-NMT 3.26 3.69 3.86 BiSET(random template) 3.09 3.69 3.71 BiSET(without A2T) 3.24 3.75 3.72 BiSET(best template) 3.35 3.98 3.93 Reference 3.55 3.91 3.89 Table 6: Results of human evaluation. In Table 7 we present two real examples, which show the templates found by our model are indeed related to the source articles, and with their aid, our model succeeds to keep the main content of the source articles for summarization while discarding unrelated words like ‘US’ and ‘Olympic Games’. 7https://github.com/OpenNMT/OpenNMT-py 8http://www4.comp.polyu.edu.hk/˜cszqcao/ 2160 Source factory orders for manufactured goods rose #.# percent in September, the commerce department said here Thursday. Ref September factory orders up #.# percent. Temp January factory orders in US up #.# percent. BiSET factory orders up #.# percent in September. Source some #.# billion people worldwide are expected to watch Germany face Costa Rica on television at the opening match of football’s World Cup, German public broadcaster zdf said Thursday. Ref #.# billion tv viewers expected for opening World Cup match. Temp billions around world watch the Olympic Games opening ceremony. BiSET #.# billions around world expected to watch World Cup. Table 7: Examples of the generated templates and summaries by our model. ‘#’ refers to masked numbers. 5 Related Work Abstractive sentence summarization, a task analogous to headline generation or sentence compression, aims to generate a brief summary given a short source article. Early studies in this problem mainly focus on statistical or linguistic-rule-based methods, including those based on extractive and compression (Jing and McKeown, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2010), templates (Zhou and Hovy, 2004) and statistical machine translation (Banko et al., 2000). The advent of large-scale summarization corpora accelerates the development of various neural network methods. Rush et al. (2015) first applied an attention-based sequence-to-sequence model for abstractive summarization, which includes a convolutional neural network (CNN) encoder and a feed-forward network decoder. Chopra et al. (2016) replaced the decoder with a recurrent neural network (RNN). Nallapati et al. (2016) further changed the sequence-to-sequence model to a fully RNN-based model. Besides, Gu et al. (2016) found that this task benefits from copying words from the source articles and proposed the CopyNet correspondingly. With a similar purpose, Gulcehre et al. (2016) proposed to use a switch gate to control when to copy from the source article and when to generate from the vocabulary. Zhou et al. (2017) employed a selective gate to filter out unimportant information when encoding. Some other work attempts to incorporate external knowledge for abstractive summarization. For example, Nallapati et al. (2016) proposed to enrich their encoder with handcrafted features such as named entities and part-of-speech (POS) tags. Guu et al. 
(2018) also attempted to encode humanwritten sentences to improve neural text generation. Similar to our work, Cao et al. (2018a) proposed to retrieve a related summary from the training set as soft template to assist with the summarization. However, their approach tends to oversimplify the role of the template, by directly concatenating a template after the source article encoding. In contrast, our bi-directional selective mechanism exhibits a novel attempt to selecting key information from the article and the template in a mutual manner, offering greater flexibility in using the template. 6 Conclusion In this paper, we presented a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. To counteract the verbosity and insufficiency of training data, we proposed to retrieve high-quality existing summaries as templates to assist with source article representations through an ingenious bidirectional selective layer. The enhanced article representations are expected to contribute towards better summarization eventually. We also developed the corresponding retrieval and re-ranking modules for obtaining quality templates. Extensive evaluations were conducted on a standard benchmark dataset and experimental results show that our model can quickly pick out high-quality templates from the training corpus, laying key foundation for effective article representations and summary generations. The results also show that our model outperforms all the baseline models and sets a new state of the art. An ablation study validates the role of the bi-directional selective layer, and a human evaluation further proves that our model can generate informative, concise, and readable summaries. 7 Acknowledgement The paper was partially supported by the Program for Guangdong Introducing Innovative and Enterpreneurial Teams (No.2017ZT07X355) and the Key R&D Program of Guangdong Province (2019B010120001). 2161 References Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 318–325. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018a. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 152–161. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018b. Faithful to the original: Fact aware neural abstractive summarization. In Thirty-Second AAAI Conference on Artificial Intelligence. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2358–2367. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 933–941. 
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631–1640. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140–149. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association of Computational Linguistics, 6:437–450. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Hongyan Jing and Kathleen R. McKeown. 2000. Cut and paste based text summarization. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 178–185. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 655–665. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. Proceedings of ACL 2017, System Demonstrations, pages 67–72. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39. Chin Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 71–78. Junyang Lin, Sun Xu, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 163–169. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. 2162 Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Joint Workshop on Automatic Knowledge Base Construction and Web-Scale Knowledge Extraction, pages 95–100. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. 
On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Liang Zhou and Eduard Hovy. 2004. Templatefiltered headline summarization. Text Summarization Branches Out. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1095– 1104.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2163–2174 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2163 Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards Hou Pong Chan1, Wang Chen1, Lu Wang2, and Irwin King1 1The Chinese University of Hong Kong, Shatin, N.T., Hong Kong 2Northeastern Univesity, Boston, MA, USA 1{hpchan, wchen, king}@cse.cuhk.edu.hk [email protected] Abstract Generating keyphrases that summarize the main points of a document is a fundamental task in natural language processing. Although existing generative models are capable of predicting multiple keyphrases for an input document as well as determining the number of keyphrases to generate, they still suffer from the problem of generating too few keyphrases. To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to generate both sufficient and accurate keyphrases. Furthermore, we introduce a new evaluation method that incorporates name variations of the ground-truth keyphrases using the Wikipedia knowledge base. Thus, our evaluation method can more robustly evaluate the quality of predicted keyphrases. Extensive experiments on five real-world datasets of different scales demonstrate that our RL approach consistently and significantly improves the performance of the state-of-the-art generative models with both conventional and new evaluation methods. 1 Introduction The task of keyphrase generation aims at predicting a set of keyphrases that convey the core ideas of a document. Figure 1 shows a sample document and its keyphrase labels. The keyphrases in red color are present keyphrases that appear in the document, whereas the blue ones are absent keyphrases that do not appear in the input. By distilling the key information of a document into a set of succinct keyphrases, keyphrase generation facilitates a wide variety of downstream applications, including document clustering (Hammouda et al., 2005; Hulth and Megyesi, 2006), opinion mining (Berend, 2011), and summarization (Zhang et al., 2004; Wang and Cardie, 2013). Document: DCE MRI data analysis for cancer area classification. The paper aims at improving the support of medical researchers in the context of in-vivo cancer imaging… The proposed approach is based on a three-step procedure: i) robust feature extraction from raw time-intensity curves, ii) voxel segmentation, and iii) voxel classification based on a learning-by-example approach… Finally, in the third step, a support vector machine (SVM) is trained to classify voxels according to the labels obtained by the clustering phase… Keyphrase labels: svm; dce mri; cluster analysis; classification catSeqD predictions: cancer area classification; support vector machine catSeqD-𝟐𝑹𝑭𝟏predictions: dce mri; cancer area classification; support vector machine; image segmentation; morphological analysis Enriched keyphrase labels: {svm, support vector machine}; dce mri; cluster analysis; classification Figure 1: Sample document with keyphrase labels and predicted keyphrases. We use red (blue) color to highlight present (absent) keyphrases. The underlined phrases are name variations of a keyphrase label. “catSeqD” is a keyphrase generation model from Yuan et al. (2018). “catSeqD-2RF1” denotes the catSeqD model after being trained by our RL approach. 
The enriched keyphrase labels are based on our new evaluation method. To produce both present and absent keyphrases, generative methods (Meng et al., 2017; Ye and Wang, 2018; Chen et al., 2018a,b) are designed to apply the attentional encoder-decoder model (Bahdanau et al., 2014; Luong et al., 2015) with copy mechanism (Gu et al., 2016; See et al., 2017) to approach the keyphrase generation task. However, none of the prior models can determine the appropriate number of keyphrases for a document. In reality, the optimal keyphrase count varies, and is dependent on a given document’s content. To that end, Yuan et al. (2018) introduced a training setup in which a generative model can learn to decide the number of keyphrases to predict for a given document and proposed two models. Although they provided a more realistic setup, there still exist two drawbacks. First, models trained under this setup tend 2164 to generate fewer keyphrases than the groundtruth. Our experiments on the largest dataset show that their catSeqD model generates 4.3 keyphrases per document on average, while these documents have 5.3 keyphrase labels on average. Ideally, a model should generate both sufficient and accurate keyphrases. Second, existing evaluation methods rely only on the exact matching of word stems (Porter, 2006) to determine whether a predicted phrase matches a ground-truth phrase. For example, given the document in Figure 1, if a model generates “support vector machine”, it will be treated as incorrect since it does not match the word “svm” given by the gold-standard labels. It is therefore desirable for an evaluation method to consider name variations of a groundtruth keyphrase. To address the first limitation, we design an adaptive reward function, RF1, that encourages a model to generate both sufficient and accurate keyphrases. Concretely, if the number of generated keyphrases is less than that of the groundtruth, we use recall as the reward, which does not penalize the model for generating incorrect predictions. If the model generates sufficient keyphrases, we use F1 score as the reward, to balance both recall and precision of the predictions. To optimize the model towards this nondifferentiable reward function, we formulate the task of keyphrase generation as a reinforcement learning (RL) problem and adopt the self-critical policy gradient method (Rennie et al., 2017) as the training procedure. Our RL approach is flexible and can be applied to any keyphrase generative model with an encoder-decoder structure. In Figure 1, we show a prediction result of the catSeqD model (Yuan et al., 2018) and another prediction result of the catSeqD model after being trained by our RL approach (catSeqD-2RF1). This example illustrates that our RL approach encourages the model to generate more correct keyphrases. Perhaps more importantly, the number of generated keyphrases also increases to five, which is closer to the ground-truth number (5.3). Furthermore, we propose a new evaluation method to tackle the second limitation. For each ground-truth keyphrase, we extract its name variations from various sources. If the word stems of a predicted keyphrase match the word stems of any name variation of a ground-truth keyphrase, it is treated as a correct prediction. For instance, in Figure 1, our evaluation method enhances the “svm” ground-truth keyphrase with its name variation, “support vector machine”. 
Thus, the phrase “support vector machine” generated by catSeqD and catSeqD-2RF1 will be considered correct, which demonstrates that our evaluation method is more robust than the existing one. We conduct extensive experiments to evaluate the performance of our RL approach. Experiment results on five real-world datasets show that our RL approach consistently improves the performance of the state-of-the-art models in terms of F-measures. Moreover, we analyze the sufficiency of the keyphrases generated by different models. It is observed that models trained by our RL approach generate more absent keyphrases, which is closer to the number of absent groundtruth keyphrases. Finally, we deploy our new evaluation method on the largest keyphrase generation benchmark, and the new evaluation identifies at least one name variation for 14.1% of the groundtruth keyphrases. We summarize our contributions as follows: (1) an RL approach with a novel adaptive reward function that explicitly encourages the model to generate both sufficient and accurate keyphrases; (2) a new evaluation method that considers name variations of the keyphrase labels; and (3) the new state-of-the-art performance on five real-world datasets in a setting where a model is able to determine the number of keyphrases to generate. This is the first work to study RL approach on the keyphrase generation problem. 2 Related Work 2.1 Keyphrase Extraction and Generation Traditional extractive methods select important phrases from the document as its keyphrase predictions. Most of them adopt a two-step approach. First, they identify keyphrase candidates from the document by heuristic rules (Wang et al., 2016; Le et al., 2016). Afterwards, the candidates are either ranked by unsupervised methods (Mihalcea and Tarau, 2004; Wan and Xiao, 2008) or supervised learning algorithms (Medelyan et al., 2009; Witten et al., 1999; Nguyen and Kan, 2007a). Other extractive methods apply sequence tagging models (Luan et al., 2017; Gollapalli et al., 2017; Zhang et al., 2016) to identify keyphrases. However, extractive methods cannot produce absent keyphrases. 2165 To predict both present and absent keyphrases for a document, Meng et al. (2017) proposed a generative model, CopyRNN, which is composed of an attentional encoder-decoder model (Bahdanau et al., 2014) and a copy mechanism (Gu et al., 2016). Lately, multiple extensions to CopyRNN were also presented. CorrRNN (Chen et al., 2018a) incorporates the correlation among keyphrases. TG-Net (Chen et al., 2018b) exploits the title information to learn a better representation for an input document. Chen et al. (2019) leveraged keyphrase extraction models and external knowledge to improve the performance of keyphrase generation. Ye and Wang (2018) considered a setting where training data is limited, and proposed different semi-supervised methods to enhance the performance. All of the above generative models use beam search to over-generate a large number of keyphrases and select the topk predicted keyphrases as the final predictions, where k is a fixed number. Recently, Yuan et al. (2018) introduced a setting where a model has to determine the appropriate number of keyphrases for an input document. They proposed a training setup that empowers a generative model to generate variable numbers of keyphrases for different documents. Two new models, catSeq and catSeqD, were described. 
Our work considers the same setting and proposes an RL approach, which is equipped with adaptive rewards to generate sufficient and accurate keyphrases. To our best knowledge, this is the first time RL is used for keyphrase generation. Besides, we propose a new evaluation method that considers name variations of the keyphrase labels, a novel contribution to the state-of-the-art. 2.2 Reinforcement Learning for Text Generation Reinforcement learning has been applied to a wide array of text generation tasks, including machine translation (Wu et al., 2016; Ranzato et al., 2015), text summarization (Paulus et al., 2018; Wang et al., 2018), and image/video captioning (Rennie et al., 2017; Liu et al., 2017; Pasunuru and Bansal, 2017). These RL approaches lean on the REINFORCE algorithm (Williams, 1992), or its variants, to train a generative model towards a non-differentiable reward by minimizing the policy gradient loss. Different from existing work, our RL approach uses a novel adaptive reward function, which combines the recall and F1 score via a hard gate (if-else statement). 3 Preliminary 3.1 Problem Definition We formally define the problem of keyphrase generation as follows. Given a document x, output a set of ground-truth keyphrases Y = {y1, y2, . . . , y|Y|}. The document x and each ground-truth keyphrase yi are sequences of words, i.e., x = (x1, . . . , xlx), and yi = (yi 1, . . . , yi lyi), where lx and lyi denote the numbers of words in x and yi respectively. A keyphrase that matches any consecutive subsequence of the document is a present keyphrase, otherwise it is an absent keyphrase. We use Yp = {yp,1, yp,2, . . . , yp,|Yp|} and Ya = {ya,1, ya,2, . . . , ya,|Ya|} to denote the sets of present and absent ground-truth keyphrases, respectively. Thus, the ground-truth keyphrases set can be expressed as Y = Yp ∪Ya. 3.2 Keyphrase Generation Model In this section, we describe the attentional encoder-decoder model (Bahdanau et al., 2014) with copy mechanism (See et al., 2017), which is the backbone of our implementations of the baseline generative models. Our training setup. For each documentkeyphrases pair (x, Y), we join all the keyphrases in Y into one output sequence, y = yp,1 ≀yp,2 ≀ . . . ≀yp,|Yp| ⋄ya,1 ≀ya,2 ≀. . . ≀ya,|Ya|, where ⋄is a special token that indicates the end of present keyphrases, and ≀is a delimiter between two consecutive present keyphrases or absent keyphrases. Using such (x, y) samples as training data, the encoder-decoder model can learn to generate all the keyphrases in one output sequence and determine the number keyphrases to generate. The only difference with the setup in Yuan et al. (2018) is that we use ⋄to mark the end of present keyphrases, instead of using ≀. Attentional encoder-decoder model. We use a bi-directional Gated-Recurrent Unit (GRU) (Cho et al., 2014) as the encoder. The encoder’s i-th hidden state is hi = [−→ h i; ←− h i] ∈Rdh. A single-layered GRU is adopted as the decoder. At decoding step t, the decoder hidden state is st = GRU(et−1, st−1) ∈Rds, where et−1 is the embedding of the (t −1)-th predicted word. Then we apply the attention layer in (Bahdanau 2166 et al., 2014) to compute an attention score at,i for each of the word xi in the document. The attention scores are next used to compute a context vector h∗ t for the document. The probability of predicting a word yt from a predefined vocabulary V is defined as PV (yt) = softmax(WV (WV ′[st; h∗ t ])). 
In this paper, all the W terms represent trainable parameters and we omit the bias terms for brevity. Pointer-generator network. To alleviate the out-of-vocabulary (OOV) problem, we adopt the copy mechanism from See et al. (2017). For each document x, we build a dynamic vocabulary Vx by merging the predefined vocabulary V and all the words that appear in x. Then, the probability of predicting a word yt from the dynamic vocabulary Vx is computed as PVx(yt) = pgenPV (yt) + (1 −pgen)PC(yt), where PC(yt) = P i:xi=yt at,i is the copy distribution and pgen = sigmoid(Wg[h∗ t ; st; et−1]) ∈[0, 1] is a soft gate to select between generating a word from the vocabulary V and copying a word from the document. Maximum likelihood training. We use θ to denote all model parameters and y1:t−1 to denote a sequence (y1, ..., yt−1). Previous work learns the parameters by maximizing the log-likelihood of generating the ground-truth output sequence y, defined as follows, L(θ) = − Ly X t=1 log PVx(yt|y1:t−1, x; θ). (1) 4 Reinforcement Learning Formulation We formulate the task of keyphrase generation as a reinforcement learning problem, in which an agent interacts with an environment in discrete time steps. At each time step t = 1, . . . , T, the agent produces an action (word) ˆyt sampled from the policy π(ˆyt|ˆy1:t−1, x; θ), where ˆy1:t−1 denotes the sequence generated by the agent from step 1 to t −1. After that, the environment gives a reward rt(ˆy1:t, Y) to the agent and transits to the next step t+1 with a new state ˆst+1 = (ˆy1:t, x, Y). The policy of the agent is a keyphrase generation model, i.e., π(.|ˆy1:t−1, x; θ) = PVx(.|ˆy1:t−1, x; θ). To improve the sufficiency and accuracy of both present keyphrases and absent keyphrases generated by the agent, we give separate reward signals to present keyphrase predictions and absent keyphrase predictions. Hence, we divide our RL problem into two different stages. In the first stage, we evaluate the agent’s performance on extracting present keyphrases. Once the agent generates the ‘⋄’ token, we denote the current time step as T p, the environment computes a reward using our adaptive reward function RF1 by comparing the generated keyphrases in ˆy1:T P with the ground-truth present keyphrases Yp, i.e., rT P (ˆy1:T P , Y) = RF1(ˆy1:T P , Yp). Then we enter the second stage, where we evaluate the agent’s performance on generating absent keyphrases. Upon generating the EOS token, the environment compares the generated keyphrases in ˆyT P +1:T with the ground-truth absent keyphrases Ya and computes a reward rT (ˆy1:T , Y) = RF1(ˆyT p+1:T , Ya). After that, the whole process terminates. The reward to the agent is 0 for all other time steps, i.e., rt(ˆy1:t, Y) = 0 for all t /∈{T p, T}. Let return Rt(ˆy, Y) be the sum of future reward starting from time step t, i.e., Rt(ˆy, Y) = PT τ=t rτ(ˆy1:τ, Y), where ˆy denotes the complete sequence generated by the agent, i.e., ˆy = ˆy1:T . We then simplify the expression of return into: Rt =      RF1(ˆy1:T P , Yp)+ RF1(ˆyT P +1:T , Ya) if 1 ≤t ≤T p, RF1(ˆyT P +1:T , Ya) if T p < t ≤T. (2) The goal of the agent is to maximize the expected initial return Eˆy∼π(.|x;θ)R1(ˆy, Y), where R1(ˆy, Y) = RF1(ˆy1:T P , Yp) + RF1(ˆyT P +1:T , Ya). Adaptive reward function. To encourage the model to generate sufficient and accurate keyphrases, we define our adaptive reward function RF1 as follows. First, let N be the number of predicted keyphrases, and G be the number of ground-truth keyphrases, then RF1 = ( recall if N < G, F1 otherwise. 
(3) If the model generates insufficient number of keyphrases, the reward will be the recall of the predictions. Since generating incorrect keyphrases will not decrease the recall, the model is encouraged to produce more keyphrases to boost the reward. If the model generates a sufficient number of keyphrases, the model should be discouraged from over-generating incorrect keyphrases, thus 2167 the F1 score is used as the reward, which incorporates the precision of the predicted keyphrases. REINFORCE. To maximize the expected initial return, we define the following loss function: L(θ) = −Eˆy∼π(.|x;θ)[R1(ˆy, Y)]. (4) According to the REINFORCE learning rule in Williams (1992), the expected gradient of the initial return can be expressed as ∇θL(θ) = −Eˆy∼π(.|x;θ)[PT t=1 ∇θ log π(ˆyt|ˆy1:t−1, x; θ)Rt]. In practice, we approximate the above expectation using a sample ˆy ∼π(.|x; θ). Moreover, we subtract the return Rt by a baseline Bt, which is a standard technique in RL to reduce the variance of the gradient estimator (Sutton and Barto, 1998). In theory, the baseline can be any function that is independent of the current action yt. The gradient ∇θL is then estimated by: ∇θL ≈− T X t=1 ∇θ log π(ˆyt|ˆy1:t−1, x; θ)(Rt −Bt). (5) Intuitively, the above gradient estimator increases the generation probability of a word ˆyt if its return Rt is higher than the baseline (Rt −Bt > 0). Self-critical sequence training. The main idea of self-critical sequence training (Rennie et al., 2017) is to produce another sequence ¯y from the current model using greedy search algorithm, then use the initial return obtained by ¯y as the baseline. The interpretation is that the gradient estimator increases the probability of a word if it has an advantage over the greedily decoded sequence. We apply this idea to our RL problem, which has two different stages. When in the present (absent) keyphrase prediction stage, we want the baseline Bt to be the initial return obtained by the greedy sequence ¯y in its present (absent) keyphrase prediction stage. Thus, we first let ¯T P and ¯T be the decoding steps where the greedy search algorithm generates the ⋄ token and EOS token, respectively. We then define the baseline1 as: Bt =      RF1(¯y1: ¯T P , Yp)+ RF1(¯y ¯T P +1: ¯T , Ya) if 1 ≤t ≤T p, RF1(¯y ¯T P +1: ¯T , Ya) if T p < t ≤T. (6) With Eqs. (5) and (6), we can simply perform gradient descent to train a generative model. 1The value of Bt only depends on whether ‘⋄’ exists in ˆy1:t−1, hence it does not depend on the current action ˆyt. Ground-truth Extracted variations pca principal component analysis ssd solid state drive op amps operational amplifier hackday hackathon mobile ad hoc networks manet electronic commerce e commerce Table 1: Examples of name variations extracted by our method for keyphrase labels on the KP20k dataset. 5 New Evaluation Method Our new evaluation method maintains a set of name variations ˜yi for each ground-truth keyphrase yi of x. If a predicted keyphrase ˆyi matches any name variation of a ground-truth keyphrase, then ˆyi is considered a correct prediction. A ground-truth keyphrase is also its own name variation. If there are multiple ground-truth keyphrases in x that have the same name variations set, we will only keep one of them. In our evaluation method, the name variation set of a ground-truth keyphrase may contain both present phrases and absent phrases. In such a case, a ground-truth keyphrase can be matched by a present predicted keyphrase or an absent predicted keyphrase. 
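Concretely, the matching rule can be sketched as follows; word stems are compared as stated earlier, but the stemmer call and helper names are an illustration rather than the released evaluation script.

```python
from nltk.stem.porter import PorterStemmer

_stemmer = PorterStemmer()

def stem_phrase(phrase):
    """Stem every word so that surface variants such as 'machines'/'machine' match."""
    return tuple(_stemmer.stem(w) for w in phrase.lower().split())

def matches_ground_truth(predicted, name_variations):
    """True if the predicted phrase matches any name variation of one keyphrase label.

    name_variations holds the ground-truth keyphrase itself plus the variations
    collected from acronyms and Wikipedia, as described above.
    """
    pred_stems = stem_phrase(predicted)
    return any(stem_phrase(v) == pred_stems for v in name_variations)

# e.g. matches_ground_truth("support vector machines",
#                           {"svm", "support vector machine"})  # -> True
```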
Thus, this ground-truth keyphrase should be treated as both a present ground-truth keyphrase and an absent ground-truth keyphrase, as shown in the following definition. Definition 5.1. Present (Absent) ground-truth keyphrase. If a name variation set ˜yi of a groundtruth keyphrase yi only consists of present (absent) keyphrases, then yi is a present (absent) ground-truth keyphrase. Otherwise, yi is both a present ground-truth keyphrase and an absent ground-truth keyphrase, i.e., yi ∈Yp and yi ∈ Ya. 5.1 Name Variation Extraction We extract name variations of a ground-truth keyphrase from the following sources: acronyms in the ground-truths, Wikipedia disambiguation pages, and Wikipedia entity titles. The later two sources have also been adopted by entity linking methods (Zhang et al., 2010, 2011) to find name variations. Some examples of extracted name variations are shown in Table 1. Acronyms in the ground-truths. We found that some of the ground-truth keyphrases have included an acronym at the end of the string, e.g.,“principal component analysis (pca)”. Thus, we adopt the following simple rule to extract an 2168 acronym from a ground-truth keyphrase. If a ground-truth keyphrase ends with a pair of parentheses, we will extract the phrase inside the pair, e.g., “pca”, as one of the name variations. Wikipedia entity titles. An entity page in Wikipedia provides the information of an entity, and the page title represents an unambiguous name variation of that entity. For example, a search for “solid state disk” on Wikipedia will be redirected to the entity page of “solid state drive”. In such case, the title “solid state drive” is a name variation of “solid state disk”. Wikipedia disambiguation pages. A disambiguation page helps users find the correct entity page when the input query refers to more than one entity in Wikipedia. It contains a list of entity pages that the query refers to. For example, a keyphrase of “ssd” may refer to the entity “solid state drive” or “sterol-sensing domain” in Wikipedia. To find the correct entity page for a keyphrase, we iterate through this list of possible entities. If an entity title is present in a document, we assume it is the entity that the keyphrase refers to. For example, if a document x contains “solid state drive”, we will assume that the keyphrase “ssd” refers to this entity. 6 Experiments We first report the performance of different models using the conventional evaluation method. Afterwards, we present the results based on our new evaluation method. All experiments are repeated for three times using different random seeds and the averaged results are reported. The source code and the enriched evaluation set are released to the public2. Sample output is shown in Figure 1. 6.1 Datasets We conduct experiments on five scientific article datasets, including KP20k (Meng et al., 2017), Inspec (Hulth, 2003), Krapivin (Krapivin et al., 2009), NUS (Nguyen and Kan, 2007b), and SemEval (Kim et al., 2010). Each sample from these datasets consists of the title, abstract, and keyphrases of a scientific article. We concatenate the title and abstract as an input document, and use the assigned keyphrases as keyphrase labels. 
Following the setup in (Meng et al., 2017; Yuan et al., 2018; Chen et al., 2018b), we use the training set 2Source code and evaluation set are available at https://github.com/kenchan0226/keyphrase-generation-rl of the largest dataset, KP20k, for model training and the testing sets of all five datasets to evaluate the performance of a generative model. From the training set of KP20k, we remove all articles that are duplicated in itself, either in the KP20k validation set, or in any of the five testing sets. After the cleanup, the KP20k dataset contains 509,818 training samples, 20,000 validation samples, and 20,000 testing samples. 6.2 Evaluation Metrics The performance of a model is typically evaluated by comparing the top k predicted keyphrases with the ground-truth keyphrases. The evaluation cutoff k can be either a fixed number or a variable. Most previous work (Meng et al., 2017; Ye and Wang, 2018; Chen et al., 2018a,b) adopted evaluation metrics with fixed evaluation cutoffs, e.g., F1@5. Recently, Yuan et al. (2018) proposed a new evaluation metric, F1@M, which has a variable evaluation cutoff. F1@M compares all the keyphrases predicted by the model with the ground-truth to compute an F1 score, i.e., k = number of predictions. It can also be interpreted as the original F1 score with no evaluation cutoff. We evaluate the performance of a model using a metric with a variable cutoff and a metric with a fixed cutoff, namely, F1@M and F1@5. Marco average is deployed to aggregate the evaluation scores for all testing samples. We apply Porter Stemmer before determining whether two phrases are matched. Our implementation of F1@5 is different from that of Yuan et al. (2018). Specifically, when computing F1@5, if a model generates less than five predictions, we append random wrong answers to the prediction until it reaches five predictions3. The rationale is to avoid producing similar F1@5 and F1@M, when a model (e.g., catSeq) generates less than five keyphrases, as shown in the Table 2 of Yuan et al. (2018). 6.3 Baseline and Deep Reinforced Models We train four baseline generative models using maximum-likelihood loss. These models include catSeq, catSeqD (Yuan et al., 2018), catSeqCorr (Chen et al., 2018a), and catSeqTG (Chen et al., 2018b). For all baselines, we use the method in Yuan et al. (2018) to prepare the training data, by concatenating all keyphrases 3The implementation in Yuan et al. (2018) sets F1@5 = F1@M for such samples. 2169 Model Inspec Krapivin NUS SemEval KP20k F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 catSeq 0.262 0.225 0.354 0.269 0.397 0.323 0.283 0.242 0.367 0.291 catSeqD 0.263 0.219 0.349 0.264 0.394 0.321 0.274 0.233 0.363 0.285 catSeqCorr 0.269 0.227 0.349 0.265 0.390 0.319 0.290 0.246 0.365 0.289 catSeqTG 0.270 0.229 0.366 0.282 0.393 0.325 0.290 0.246 0.366 0.292 catSeq-2RF1 0.300 0.250 0.362 0.287 0.426 0.364 0.327 0.285 0.383 0.310 catSeqD-2RF1 0.292 0.242 0.360 0.282 0.419 0.353 0.316 0.272 0.379 0.305 catSeqCorr-2RF1 0.291 0.240 0.369 0.286 0.414 0.349 0.322 0.278 0.382 0.308 catSeqTG-2RF1 0.301 0.253 0.369 0.300 0.433 0.375 0.329 0.287 0.386 0.321 Table 2: Results of present keyphrase prediction on five datasets. Suffix “-2RF1” denotes that a model is trained by our reinforcement learning approach. into one output sequence. With this setup, all baselines can determine the number of keyphrases to generate. The catSeqCorr and catSeqTG models are the CorrRNN (Chen et al., 2018a) and TGNet (Chen et al., 2018b) models trained under this setup, respectively. 
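For clarity, the F1@M and F1@5 computation of Section 6.2, including the effective padding of short prediction lists, can be sketched as below. The phrases are assumed to be already stemmed and de-duplicated, and the function names are illustrative, not the released scoring code.

```python
def _f1(num_matched, num_pred, num_gold):
    p = num_matched / num_pred if num_pred else 0.0
    r = num_matched / num_gold if num_gold else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0

def f1_at_m(predictions, gold):
    """F1@M: compare all predicted keyphrases with the ground truth (variable cutoff)."""
    matched = sum(1 for kp in predictions if kp in gold)
    return _f1(matched, len(predictions), len(gold))

def f1_at_k(predictions, gold, k=5):
    """F1@k: if fewer than k phrases are predicted, the list is effectively padded
    with incorrect placeholders, so precision is always computed over k slots."""
    topk = predictions[:k]
    matched = sum(1 for kp in topk if kp in gold)
    return _f1(matched, k, len(gold))
```

The macro average of these per-document scores gives the numbers reported in the tables below.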
For the reinforced models, we follow the method in Section 3.2 to concatenate keyphrases. We first pre-train each baseline model using maximum-likelihood loss, and then apply our RL approach to train each of them. We use a suffix “2RF1” to indicate that a generative model is finetuned by our RL algorithm, e.g., catSeq-2RF1. 6.4 Implementation Details Following (Yuan et al., 2018), we use greedy search (beam search with beam width 1) as the decoding algorithm during testing. We do not apply the Porter Stemmer to the keyphrase labels in the SemEval testing dataset because they have already been stemmed. We remove all the duplicated keyphrases from the predictions before computing an evaluation score. The following steps are applied to preprocess all the datasets. We lowercase all characters, replace all the digits with a special token ⟨digit⟩, and perform tokenization. Following (Yuan et al., 2018), for each document, we sort all the present keyphrase labels according to their order of the first occurrence in the document. The absent keyphrase labels are then appended at the end of present keyphrase labels. We do not rearrange the order among the absent keyphrases. The vocabulary V is defined as the most frequent 50,002 words, i.e., |V | = 50002. We train all the word embeddings from scratch with a hidden size of 100. The hidden size of encoder dh and the hidden size of decoder ds are both set to 300. The followings are the dimensions of the W terms: WV ∈R|V |×ds, WV ′ ∈Rds×(dh+ds), Wg ∈R1×(dh+ds+100). The encoder bi-GRU has only one layer. The initial state of the decoder GRU is set to [−→ h Lx; ←− h 1]. For all other model parameters of the baseline models, we follow the dimensions specified by their corresponding papers (Yuan et al., 2018; Chen et al., 2018a,b). We initialize all the model parameters using a uniform distribution within the interval [−0.1, 0.1]. During training, we use a dropout rate of 0.1 and gradient clipping of 1.0. For maximum-likelihood training (as well as pretraining), we use the Adam optimization algorithm (Kingma and Ba, 2014) with a batch size of 12 and an initial learning rate of 0.001. We evaluate the validation perplexity of a model for every 4000 iterations. We reduce the learning rate by half if the validation perplexity (ppl) stops dropping for one check-point and stop the training when the validation ppl stops dropping for three contiguous check-points. We also use teachingforcing during the training. For RL training, we use the Adam optimization algorithm (Kingma and Ba, 2014) with a batch size of 32 and an initial learning rate of 0.00005. We evaluate the validation initial return of a model for every 4000 iterations. We stop the training when the validation initial return stops increasing for three contiguous check-points. If the model generates more than one ‘⋄’ segmenter, we will only keep the first one and remove the duplicates. If the model does not generate the ‘⋄’ segmenter, we will manually insert a ‘⋄’ segmenter to the first position of the generated sequence. 6.5 Main Results In this section, we evaluate the performance of present keyphrase prediction and absent keyphrase prediction separately. The evaluation results of different models on predicting present keyphrases are shown in Table 2. 
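As a reminder of how the reinforced models are trained, the adaptive reward of Eq. (3) and the self-critical advantage behind Eqs. (2)–(6) can be sketched as follows; this is a simplified illustration with illustrative names, not the authors' training code.

```python
def adaptive_rf1(predicted, gold):
    """RF1 in Eq. (3): recall while too few phrases are produced, F1 otherwise."""
    if not gold:
        return 0.0
    matched = sum(1 for kp in predicted if kp in gold)
    recall = matched / len(gold)
    if len(predicted) < len(gold):
        return recall
    precision = matched / len(predicted) if predicted else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

def initial_return(pred_present, pred_absent, gold_present, gold_absent):
    """R_1 in Eq. (2): separate rewards for the present and absent stages."""
    return (adaptive_rf1(pred_present, gold_present)
            + adaptive_rf1(pred_absent, gold_absent))

def self_critical_advantage(sampled, greedy, gold_present, gold_absent):
    """Initial-return advantage R_1 - B_1 used to weight the policy-gradient loss;
    sampled and greedy are (present, absent) prediction pairs split at the segmenter."""
    return (initial_return(sampled[0], sampled[1], gold_present, gold_absent)
            - initial_return(greedy[0], greedy[1], gold_present, gold_absent))
```

In the full two-stage formulation, tokens generated after the segmenter receive only the absent-stage term of the return, as specified in Eq. (2).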
We observe that our re2170 Model Inspec Krapivin NUS SemEval KP20k F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 catSeq 0.008 0.004 0.036 0.018 0.028 0.016 0.028 0.020 0.032 0.015 catSeqD 0.011 0.007 0.037 0.018 0.024 0.014 0.024 0.016 0.031 0.015 catSeqCorr 0.009 0.005 0.038 0.020 0.024 0.014 0.026 0.018 0.032 0.015 catSeqTG 0.011 0.005 0.034 0.018 0.018 0.011 0.027 0.019 0.032 0.015 catSeq-2RF1 0.017 0.009 0.046 0.026 0.031 0.019 0.027 0.018 0.047 0.024 catSeqD-2RF1 0.021 0.010 0.048 0.026 0.037 0.022 0.030 0.021 0.046 0.023 catSeqCorr-2RF1 0.020 0.010 0.040 0.022 0.037 0.022 0.031 0.021 0.045 0.022 catSeqTG-2RF1 0.021 0.012 0.053 0.030 0.031 0.019 0.030 0.021 0.050 0.027 Table 3: Results of absent keyphrase prediction on five datasets. Model Present Absent MAE Avg. # MAE Avg. # oracle 0.000 2.837 0.000 2.432 catSeq 2.271 3.781 1.943 0.659 catSeqD 2.225 3.694 1.961 0.629 catSeqCorr 2.292 3.790 1.914 0.703 catSeqTG 2.276 3.780 1.956 0.638 catSeq-2RF1 2.118 3.733 1.494 1.574 catSeqD-2RF1 2.087 3.666 1.541 1.455 catSeqCorr-2RF1 2.107 3.696 1.557 1.409 catSeqTG-2RF1 2.204 3.865 1.439 1.749 Table 4: The abilities of predicting the correct number of keyphrases on the KP20k dataset. MAE denotes the mean absolute error (the lower the better), Avg. # denotes the average number of generated keyphrases per document. inforcement learning algorithm consistently improves the keyphrase extraction ability of all baseline generative models by a large margin. On the largest dataset KP20k, all reinforced models obtain significantly higher F1@5 and F1@M (p < 0.02, t-test) than the baseline models. We then evaluate the performance of different models on predicting absent keyphrases. Table 3 suggests that our RL algorithm enhances the performance of all baseline generative models on most datasets, and maintains the performance of baseline methods on the SemEval dataset. Note that predicting absent keyphrases for a document is an extremely challenging task (Yuan et al., 2018), thus the significantly lower scores than those of present keyphrase prediction. 6.6 Number of Generated Keyphrases We analyze the abilities of different models to predict the appropriate number of keyphrases. All duplicated keyphrases are removed during preprocessing. We first measure the mean absolute error (MAE) between the number of generated keyphrases and the number of groundtruth keyphrases for all documents in the KP20k dataset. We also report the average number of generated keyphrases per document, denoted as Model Present Absent F1@M F1@5 F1@M F1@5 catSeq 0.367 0.291 0.032 0.015 catSeq-RF1 0.380 0.336 0.006 0.003 catSeq-2F1 0.378 0.278 0.042 0.020 catSeq-2RF1 0.383 0.310 0.047 0.024 Table 5: Ablation study on the KP20k dataset. Suffix “-2RF1” denotes our full RL approach. Suffix “2F1” denotes that we replace our adaptive RF1 reward function in the full approach by an F1 reward function. Suffix “-RF1” denotes that we replace the two separate RF1 reward signals in our full approach with only one RF1 reward signal for all the generated keyphrases. “Avg. #”. The results are shown in Table 4, where oracle is a model that always generates the ground-truth keyphrases. The resultant MAEs demonstrate that our deep reinforced models notably outperform the baselines on predicting the number of absent keyphrases and slightly outperform the baselines on predicting the number of present keyphrases. Moreover, our deep reinforced models generate significantly more absent keyphrases than the baselines (p < 0.02, ttest). 
The main reason is that the baseline models can only generate very few absent keyphrases, whereas our RL approach uses recall as the reward and encourages the model to generate more absent keyphrases. Besides, the baseline models and our reinforced models generate similar numbers of present keyphrases, while our reinforced models achieve notably higher F-measures, implying that our methods generate present keyphrases more accurately than the baselines. 6.7 Ablation Study We conduct an ablation study to further analyze our reinforcement learning algorithm. The results are reported in Table 5. Single Reward vs. Separate Rewards. To verify the effectiveness of separately rewarding present and absent keyphrases, we train the cat2171 Model Present Absent F1@M F1@M F1@M F1@M old new old new catSeq 0.367 0.376 0.032 0.034 catSeqD 0.363 0.372 0.031 0.033 catSeqCorr 0.365 0.375 0.032 0.034 catSeqTG 0.366 0.374 0.032 0.033 catSeq-2RF1 0.383 0.396 0.047 0.054 catSeqD-2RF1 0.379 0.390 0.046 0.052 catSeqCorr-2RF1 0.382 0.393 0.045 0.051 catSeqTG-2RF1 0.386 0.398 0.050 0.056 Table 6: Keyphrase prediction results on the KP20k dataset with our new evaluation method. Seq model using another RL algorithm which only gives one reward for all generated keyphrases without distinguishing present keyphrases and absent keyphrases. We use “catSeq-RF1” to denote such a method. As seen in Table 5, although the performance of catSeq-RF1 is competitive to catSeq-2RF1 on predicting present keyphrases, it yields an extremely poor performance on absent keyphrase prediction. We analyze the cause as follows. During the training process of catSeqRF1, generating a correct present keyphrase or a correct absent keyphrase leads to the same degree of improvement in the return at every time step. Since producing a correct present keyphrase is an easier task, the model tends to generate present keyphrases only. Alternative reward function. We implement a variant of our RL algorithm by replacing the adaptive RF1 reward function with an F1 score function (indicated with a suffix “-2F1” in the result table). By comparing the last two rows in Table 5, we observe that our RF1 reward function slightly outperforms the F1 reward function. 6.8 Analysis of New Evaluation Method We extract name variations for all keyphrase labels in the testing set of KP20k dataset, following the methodology in Section 5. Our method extracts at least one additional name variation for 14.1% of the ground-truth keyphrases. For these enhanced keyphrases, the average number of name variations extracted is 1.01. Among all extracted name variations, 14.1% come from the acronym in the ground-truth, 28.2% from the Wikipedia disambiguation pages, and the remaining 61.6% from Wikipedia entity page titles. We use our new evaluation method to evaluate the performance of different keyphrase generation models, and compare with the existing evaluation method. Table 6 shows that for all generative models, the evaluation scores computed by our method are higher than those computed by prior method. This demonstrates that our proposed evaluation successfully captures name variations of groundtruth keyphrases generated by different models, and can therefore evaluate the quality of generated keyphrases in a more robust manner. 7 Conclusion and Future Work In this work, we propose the first RL approach to the task of keyphrase generation. 
In our RL approach, we introduce an adaptive reward function RF1, which encourages the model to generate both sufficient and accurate keyphrases. Empirical studies on real data demonstrate that our deep reinforced models consistently outperform the current state-of-the-art models. In addition, we propose a novel evaluation method which incorporates name variations of the ground-truth keyphrases. As a result, it can more robustly evaluate the quality of generated keyphrases. One potential future direction is to investigate the performance of other encoder-decoder architectures on keyphrase generation such as Transformer (Vaswani et al., 2017) with multi-head attention module (Li et al., 2018; Zhang et al., 2018a). Another interesting direction is to apply our RL approach on the microblog hashtag annotation problem (Wang et al., 2019; Gong and Zhang, 2016; Zhang et al., 2018b). Acknowledgments The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 of the General Research Fund) and Meitu (No. 7010445). Lu Wang is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA865017-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We would like to thank Jiani Zhang, and the three anonymous reviewers for their comments. 2172 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). G´abor Berend. 2011. Opinion expression mining by exploiting keyphrase extraction. In Fifth International Joint Conference on Natural Language Processing, IJCNLP 2011, Chiang Mai, Thailand, November 8-13, 2011, pages 1162–1170. Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018a. Keyphrase generation with correlation constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 4057–4066. Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, and Irwin King. 2019. An integrated approach for keyphrase generation via exploring the power of retrieval and extraction. CoRR, abs/1904.03454. Wang Chen, Yifan Gao, Jiani Zhang, Irwin King, and Michael R. Lyu. 2018b. Title-guided encoding for keyphrase generation. CoRR, abs/1808.08575. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724– 1734. Sujatha Das Gollapalli, Xiaoli Li, and Peng Yang. 2017. Incorporating expert knowledge into keyphrase extraction. 
In Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3180–3187. Yuyun Gong and Qi Zhang. 2016. Hashtag recommendation using attention-based convolutional neural network. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2782–2788. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Khaled M. Hammouda, Diego N. Matute, and Mohamed S. Kamel. 2005. Corephrase: Keyphrase extraction for document clustering. In Machine Learning and Data Mining in Pattern Recognition, 4th International Conference, MLDM 2005, Leipzig, Germany, July 9-11, 2005, Proceedings, pages 265–274. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2003, Sapporo, Japan, July 11-12, 2003. Anette Hulth and Be´ata Megyesi. 2006. A study on automatically extracted keywords in text categorization. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5 : Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval@ACL 2010, Uppsala University, Uppsala, Sweden, July 15-16, 2010, pages 21–26. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese. 2009. Large dataset for keyphrases extraction. Technical report, University of Trento. Tho Thi Ngoc Le, Minh Le Nguyen, and Akira Shimazu. 2016. Unsupervised keyphrase extraction: Introducing new kinds of words to keyphrases. In AI 2016: Advances in Artificial Intelligence - 29th Australasian Joint Conference, Hobart, TAS, Australia, December 5-8, 2016, Proceedings, pages 665–671. Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang. 2018. Multi-head attention with disagreement regularization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2897–2903. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image captioning via policy gradient optimization of spider. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 873–881. Yi Luan, Mari Ostendorf, and Hannaneh Hajishirzi. 2017. Scientific information extraction with semisupervised neural tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2641–2651. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. 2173 Olena Medelyan, Eibe Frank, and Ian H. 
Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1318–1327. Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 582–592. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing , EMNLP 2004, A meeting of SIGDAT, a Special Interest Group of the ACL, held in conjunction with ACL 2004, 25-26 July 2004, Barcelona, Spain, pages 404–411. Thuy Dung Nguyen and Min-Yen Kan. 2007a. Keyphrase extraction in scientific publications. In Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers, 10th International Conference on Asian Digital Libraries, ICADL 2007, Hanoi, Vietnam, December 10-13, 2007, Proceedings, pages 317–326. Thuy Dung Nguyen and Min-Yen Kan. 2007b. Keyphrase extraction in scientific publications. In Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers, 10th International Conference on Asian Digital Libraries, ICADL 2007, Hanoi, Vietnam, December 10-13, 2007, Proceedings, pages 317–326. Ramakanth Pasunuru and Mohit Bansal. 2017. Reinforced video captioning with entailment rewards. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 979–985. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations (ICLR). Martin F. Porter. 2006. An algorithm for suffix stripping. Program, 40(3):211–218. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR, abs/1511.06732. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1179–1195. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073– 1083. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement learning - an introduction. Adaptive computation and machine learning. MIT Press. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008, pages 855–860. Li Wang, Junlin Yao, Yunzhe Tao, Li Zhong, Wei Liu, and Qiang Du. 
2018. A reinforced topic-aware convolutional sequence-to-sequence model for abstractive text summarization. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4453–4460. Lu Wang and Claire Cardie. 2013. Domainindependent abstract generation for focused meeting summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1395–1405. Minmei Wang, Bo Zhao, and Yihua Huang. 2016. PTR: phrase-based topical ranking for automatic keyphrase extraction in scientific publications. In Neural Information Processing - 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16-21, 2016, Proceedings, Part IV, pages 120–128. Yue Wang, Jing Li, Irwin King, Michael R Lyu, and Shuming Shi. 2019. Microblog hashtag generation via encoding conversation contexts. CoRR, abs/1905.07584. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. KEA: practical automatic keyphrase extraction. In Proceedings of the Fourth ACM conference on Digital Libraries, August 11-14, 1999, Berkeley, CA, USA, pages 254–255. 2174 Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, and Klaus Macherey et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Hai Ye and Lu Wang. 2018. Semi-supervised learning for neural keyphrase generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4142–4153. Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Daqing He, and Adam Trischler. 2018. Generating diverse numbers of diverse keyphrases. CoRR, abs/1810.05241. Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit-Yan Yeung. 2018a. Gaan: Gated attention networks for learning on large and spatiotemporal graphs. In Proceedings of the ThirtyFourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10, 2018, pages 339–349. Qi Zhang, Yang Wang, Yeyun Gong, and Xuanjing Huang. 2016. Keyphrase extraction using deep recurrent neural networks on twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 836–845. Wei Zhang, Yan Chuan Sim, Jian Su, and Chew Lim Tan. 2011. Entity linking with effective acronym expansion, instance selection, and topic modeling. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages 1909–1914. Wei Zhang, Jian Su, Chew Lim Tan, and Wenting Wang. 2010. Entity linking leveraging automatically generated annotation. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 1290–1298. Yingyi Zhang, Jing Li, Yan Song, and Chengzhi Zhang. 2018b. Encoding conversation context for neural keyphrase extraction from microblog posts. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1676–1686. Yongzheng Zhang, A. Nur Zincir-Heywood, and Evangelos E. Milios. 2004. World wide web site summarization. Web Intelligence and Agent Systems, 2(1):39–53.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2175–2189 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2175 Scoring Sentence Singletons and Pairs for Abstractive Summarization Logan Lebanoff† Kaiqiang Song† Franck Dernoncourt§ Doo Soon Kim§ Seokhwan Kim§ Walter Chang§ Fei Liu† †Computer Science Department, University of Central Florida, Orlando, FL 32816 {loganlebanoff, kqsong}@knight.ucf.edu [email protected] §Adobe Research, San Jose, CA 95110 {dernonco,dkim,seokim,wachang}@adobe.com Abstract When writing a summary, humans tend to choose content from one or two sentences and merge them into a single summary sentence. However, the mechanisms behind the selection of one or multiple source sentences remain poorly understood. Sentence fusion assumes multi-sentence input; yet sentence selection methods only work with single sentences and not combinations of them. There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs. This paper attempts to bridge the gap by ranking sentence singletons and pairs together in a unified space. Our proposed framework attempts to model human methodology by selecting either a single sentence or a pair of sentences, then compressing or fusing the sentence(s) to produce a summary sentence. We conduct extensive experiments on both single- and multidocument summarization datasets and report findings on sentence selection and abstraction. 1 Introduction Abstractive summarization aims at presenting the main points of an article in a succinct and coherent manner. To achieve this goal, a proficient editor can rewrite a source sentence into a more succinct form by dropping inessential sentence elements such as prepositional phrases and adjectives. She can also choose to fuse multiple source sentences into one by reorganizing the points in a coherent manner. In fact, it appears to be common practice to summarize by either compressing single sentences or fusing multiple sentences. We investigate this hypothesis by analyzing human-written abstracts contained in three large datasets: DUC04 (Over and Yen, 2004), CNN/Daily Mail (Hermann et al., 2015), and XSum (Narayan et al., 2018). For every summary sentence, we find its ground-truth set containing one or more source CNN/DM DUC−04 XSum 0% 20% 40% 60% 80% InstanceType Compression (1) Fusion (2) Fusion (3+) Figure 1: Portions of summary sentences generated by compression (content is drawn from 1 source sentence) and fusion (content is drawn from 2 or more source sentences). Humans often grab content from 1 or 2 document sentences when writing a summary sentence. sentences that exhibit a high degree of similarity with the summary sentence (details in §4). As shown in Figure 1, across the three datasets, 6085% of summary sentences are generated by fusing one or two source sentences. Selecting summary-worthy sentences has been studied in the literature, but there lacks a mechanism to weigh sentence singletons and pairs in a unified space. Extractive methods focus on selecting sentence singletons using greedy (Carbonell and Goldstein, 1998), optimization-based (Gillick and Favre, 2009; Kulesza and Taskar, 2011; Cho et al., 2019), and (non-)autoregressive methods (Cheng and Lapata, 2016; Kedzie et al., 2018). 
In contrast, existing sentence fusion studies tend to assume ground sets of source sentences are already provided, and the system fuses each set of sentences into a single one (Daum´e III and Marcu, 2004; Filippova, 2010; Thadani and McKeown, 2013). There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs. This paper attempts to bridge the gap by ranking singletons and pairs together by their likelihoods of producing summary sentences. The selection of sentence singletons and pairs can bring benefit to neural abstractive summarization, as a number of studies seek to separate content selection from summary generation (Chen 2176 and Bansal, 2018; Hsu et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018). Content selection draws on domain knowledge to identify relevant content, while summary generation weaves together selected source and vocabulary words to form a coherent summary. Despite having local coherence, system summaries can sometimes contain erroneous details (See et al., 2017) and forged content (Cao et al., 2018b; Song et al., 2018). Separating the two tasks of content selection and summary generation allows us to closely examine the compressing and fusing mechanisms of an abstractive summarizer. In this paper we propose a method to learn to select sentence singletons and pairs, which then serve as the basis for an abstractive summarizer to compose a summary sentence-by-sentence, where singletons are shortened (i.e., compressed) and pairs are merged (i.e., fused). We exploit stateof-the-art neural representations and traditional vector space models to characterize singletons and pairs; we then provide suggestions on the types of representations useful for summarization. Experiments are performed on both single- and multi-document summarization datasets, where we demonstrate the efficacy of selecting sentence singletons and pairs as well as its utility to abstractive summarization. Our research contributions can be summarized as follows: • the present study fills an important gap by selecting sentence singletons and pairs jointly, assuming a summary sentence can be created by either shortening a singleton or merging a pair. Compared to abstractive summarizers that perform content selection implicitly, our method is flexible and can be extended to multi-document summarization where training data is limited; • we investigate the factors involved in representing sentence singletons and pairs. We perform extensive experiments and report findings on sentence selection and abstraction.1 2 Related Work Content selection is integral to any summarization system. Neural approaches to abstractive summarization often perform content selection jointly with surface realization using an encoder-decoder architecture (Rush et al., 2015; Nallapati et al., 1We make our code and models publicly available at https: //github.com/ucfnlp/summarization-sing-pair-mix 2016; Chen et al., 2016b; Tan et al., 2017; See et al., 2017; Paulus et al., 2017; Celikyilmaz et al., 2018; Narayan et al., 2018). Training these models end-to-end means learning to perform both tasks simultaneously and can require a massive amount of data that is unavailable and unaffordable for many summarization tasks. Recent approaches emphasize the importance of separating content selection from summary generation for abstractive summarization. 
Studies exploit extractive methods to identify content words and sentences that should be part of the summary and use them to guide the generation of abstracts (Chen and Bansal, 2018; Gehrmann et al., 2018; Lebanoff et al., 2018). On the other hand, surface lexical features have been shown to be effective in identifying pertinent content (Carenini et al., 2006; Wong et al., 2008; Galanis et al., 2012). Examples include sentence length, position, centrality, word frequency, whether a sentence contains topic words, and others. The surface cues can also be customized for new domains relatively easily. This paper represents a step forward in this direction, where we focus on developing lightweight models to select summary-worthy sentence singletons and pairs and use them as the basis for summary generation. A succinct sentence can be generated by shortening or rewriting a lengthy source text. Recent studies have leveraged neural encoder-decoder models to rewrite the first sentence of an article to a title-like summary (Nallapati et al., 2016; Zhou et al., 2017; Li et al., 2017; Song et al., 2018; Guo et al., 2018; Cao et al., 2018a). Compressive summaries can be generated in a similar vein by selecting important source sentences and then dropping inessential sentence elements such as prepositional phrases. Before the era of deep neural networks it has been an active area of research, where sentence selection and compression can be accomplished using a pipeline or a joint model (Daum´e III and Marcu, 2002; Zajic et al., 2007; Gillick and Favre, 2009; Wang et al., 2013; Li et al., 2013, 2014; Filippova et al., 2015). A majority of these studies focus on selecting and compressing sentence singletons only. A sentence can also be generated through fusing multiple source sentences. However, many aspects of this approach are largely underinvestigated, such as determining the set of source sentences to be fused, handling its large cardinality, 2177 Sentence Pair: Merged Sentence: (A) The bombing killed 58 people. Pakistan denies its spy agency helped plan bombing that (B) Wajid Shamsul Hasan, Pakistan’s high commissioner to Britain, and Hamid Gul, killed 58. former head of the ISI, firmly denied the agency’s involvement in the attack. Sentence Singleton: Compressed Sentence: (A) Pakistani Maj. Gen. Athar Abbas said the report “unfounded and malicious” and Maj. Gen. Athar Abbas said the report was an “effort to an “effort to malign the ISI,” – Pakistan’s directorate of inter-services intelligence. malign the ISI.” Table 1: Example sentence singleton and pair, before and after compression/merging. and identifying the sentence relationships for performing fusion. Previous studies assume a set of similar source sentences can be gathered by clustering sentences or by comparing to a reference summary sentence (Barzilay and McKeown, 2005; Filippova, 2010; Shen and Li, 2010; Chenal and Cheung, 2016; Liao et al., 2018); but these methods can be suboptimal. Joint models for sentence selection and fusion implicitly perform content planning (Martins and Smith, 2009; BergKirkpatrick et al., 2011; Bing et al., 2015; Durrett et al., 2016) and there is limited control over which sentences are merged and how. In contrast, this work attempts to teach the system to determine if a sentence singleton or a pair should be selected to produce a summary sentence. A sentence pair (A, B) is preferred over its consisting sentences if they carry complementary content. Table 1 shows an example. 
Sentence B contains a reference (“the attack”) and A contains a more complete description for it (“bombing that killed 58”). Sentences A and B each contain certain valuable information, and an appropriate way to merge them exists. As a result, a sentence pair can be scored higher than a singleton given the content it carries and compatibility of its consisting sentences. In the following we describe methods to represent singletons and pairs in a unified framework and scoring them for summarization. 3 Our Model We present the first attempt to transform sentence singletons and pairs to real-valued vector representations capturing semantic salience so that they can be measured against each other (§3.1). This is a nontrivial task, as it requires a direct comparison of texts of varying length—a pair of sentences is almost certainly longer than a single sentence. For sentence pairs, the representations are expected to further encode sentential semantic compatibility. In §3.2, we describe our method to utilize highest scoring singletons and pairs to a neural abstractive summarizer to generate summaries. 3.1 Scoring Sentence Singletons and Pairs Given a document or set of documents, we create a set D of singletons and pairs by gathering all single sentences and arbitrary pairs of them. We refer to a singleton or pair in the set as an instance. The sentences in a pair are arranged in order of their appearance in the document or by date of documents. Let N be the number of single sentences in the input document(s), a complete set of singletons and pairs will contain |D|=N(N−1) 2 +N instances. Our goal is to score each instance based on the amount of summary-worthy content it conveys. Despite their length difference, a singleton can be scored higher than a pair if it contains a significant amount of salient content. Conversely, a pair can outweigh a singleton if its component sentences are salient and compatible with each other. Building effective representations for singletons and pairs is therefore of utmost importance. We attempt to build a vector representation for each instance. The representation should be invariant to the instance type, i.e., a singleton or pair. In this paper we exploit the BERT architecture (Devlin et al., 2018) to learn instance representations. The representations are fine-tuned for a classification task predicting whether a given instance contains content used in human-written summary sentences (details for ground-truth creation in §4). BERT BERT supports our goal of encoding singletons and pairs indiscriminately. It introduces two pretraining tasks to build deep contextual representations for words and sequences. A sequence can be a single sentence (A) or pair of sentences (A+B).2 The first task predicts missing words in the input sequence. The second task predicts if B is the next sentence following A. It requires the vector representation for (A+B) to capture the coherence of two sentences. As coherent sentences can often be fused together, we conjecture that the second task is particularly suited for our goal. 2In the original BERT paper (Devlin et al., 2018), a “sentence” is used in a general sense to denote an arbitrary span of contiguous text; we refer to an actual linguistic sentence. 
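Before turning to the encoder details, note that the candidate set D introduced above is straightforward to enumerate; the sketch below builds it and checks the |D| = N(N-1)/2 + N count. Function and variable names are ours and the snippet is for intuition only.

```python
from itertools import combinations

def build_candidate_set(num_sentences):
    """Enumerate the candidate set D for one document: every singleton
    plus every pair of distinct sentences, with the sentences of a pair
    kept in their order of appearance (for multi-document inputs, the
    paper additionally orders sentences by document date)."""
    singletons = [(i,) for i in range(num_sentences)]
    pairs = list(combinations(range(num_sentences), 2))
    return singletons + pairs

N = 4
D = build_candidate_set(N)
assert len(D) == N * (N - 1) // 2 + N   # |D| = N(N-1)/2 + N = 10
```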
2178 Concretely, BERT constructs an input sequence by prepending a singleton or pair with a “[CLS]” symbol and delimiting the two sentences of a pair with “[SEP].” The representation learned for the [CLS] symbol is used as an aggregate sequence representation for the later classification task. We show an example input sequence in Eq. (1). In the case of a singleton, wB i are padding tokens. {wi}=[CLS],wA 1,wA 2,..., [SEP],wB 1,wB 2,..., [SEP] (1) ei=ew(wi)+esgmt(wi)+ewpos(wi)+espos(wi) (2) In Eq. (2), each token wi is characterized by an input embedding ei, calculated as the elementwise sum of the following embeddings: • ew(wi) is a token embedding; • esgmt(wi) is a segment embedding, signifying whether wi comes from sentence A or B. • ewpos(wi) is a word position embedding indicating the index of wi in the input sequence; • we introduce espos(wi) to be a sentence position embedding; if wi is from sentence A (or B), espos(wi) is the embedding indicating the index of sentence A (or B) in the original document. Intuitively, these embeddings mean that, the extent to which a word contributes to the sequence (A+B) representation depends on these factors: (i) word salience, (ii) importance of sentences A and B, (iii) word position in the sequence, and, (iv) sentence position in the document. These factors coincide with heuristics used in summarization literature (Nenkova and McKeown, 2011), where leading sentences of a document and the first few words of a sentence are more likely to be included in the summary. The input embeddings are then fed to a multilayer and multi-head attention architecture to build deep contextual representations for tokens. Each layer employs a Transformer block (Vaswani et al., 2017), which introduces a self-attention mechanism that allows each hidden state hl i to be compared with every other hidden state of the same layer [hl 1, hl 2, . . . , hl N] using a parallelizable, multi-head attention mechanism (Eq. (3-4)). h1 i = f1 self-attn(ei, [e1, e2, . . . , eN]) (3) hl+1 i = fl+1 self-attn(hl i, [hl 1, hl 2, . . . , hl N]) (4) The representation at final layer L for the [CLS] symbol is used as the sequence representation hL [CLS]. The representations can be fine-tuned with an additional output layer to generate state-ofthe-art results on a wide range of tasks including reading comprehension and natural language inference. We use the pretrained BERT base model and fine-tune it on our specific task of predicting if an instance (a singleton or pair) pinst = σ(w⊤hL [CLS]) is an appropriate one, i.e., belonging to the ground-truth set of summary instances for a given document. At test time, the architecture indiscriminately encodes a mixed collection of sentence singletons/pairs. We then obtain a likelihood score for each instance. This framework is thus a first effort to build semantic representations for singletons and pairs capturing informativeness and semantic compatibility of two sentences. VSM We are interested in contrasting BERT with the traditional vector space model (Manning et al., 2008) for representing singletons and pairs. BERT learns instance representations by attending to important content words, where the importance is signaled by word and position embeddings as well as pairwise word relationships. Nonetheless, it remains an open question whether BERT can successfully weave the meaning of topically important words into representations. A word “border” is topically important if the input document discusses border security. 
A topic word is likely to be repeatedly mentioned in the input document but less frequently elsewhere. Because sentences containing topical words are often deemed summaryworthy (Hong and Nenkova, 2014), it is desirable to represent sentence singletons and pairs based on the amount of topical content they convey. VSM represents each sentence as a sparse vector. Each dimension of the vector corresponds to an n-gram weighted by its TF-IDF score. A high TF-IDF score suggests the n-gram is important to the topic of discussion. We further strengthen the sentence vector with position and centrality information, i.e., the sentence position in the document and the cosine similarity between the sentence and document vector. We obtain a document vector by averaging over its sentence vectors, and we similarly obtain a vector for a pair of sentences. We use VSM representations as a baseline to contrast its performance with distributed representations from BERT. To score singletons and pairs, we use the LambdaMART model3 which has demonstrated success on related NLP tasks (Chen et al., 2016a); 3https://sourceforge.net/p/lemur/wiki/RankLib/ 2179 it also fits our requirements of ranking singletons and pairs indiscriminately. 3.2 Generating Summaries We proceed by performing a preliminary investigation of summary generation from singletons and pairs; they are collectively referred to as instances. In the previous section, a set of summary instances is selected from a document. These instances are treated as “raw materials” for a summary; they are fed to a neural abstractive summarizer which processes them into summary sentences via fusion and compression. This strategy allows us to separately evaluate the contributions from instance selection and summary composition. We employ the MMR principle (Carbonell and Goldstein, 1998) to select a set of highest scoring and non-redundant instances. The method adds an instance ˆP to the summary S iteratively per Eq. (5) until a length threshold has been reached. Each instance is weighted by a linear combination of its importance score I(Pk), obtained by BERT or VSM, and its redundancy score R(Pk), computed as the cosine similarity between the instance and partial summary. λ is a balancing factor between importance and redundancy.4 Essentially, MMR prevents the system from selecting instances that are too similar to ones already selected. ˆP = arg max Pk∈D\S h λI(Pk) −(1 −λ)R(Pk) i (5) Composing a summary from selected instances is a non-trivial task. As a preliminary investigation of summary composition, we make use of pointergenerator (PG) networks (See et al., 2017) to compress/fuse sentences into summary sentences. PG is a sequence-to-sequence model that has achieved state-of-the-art performance in abstractive summarization by having the ability to both copy tokens from the document or generate new tokens from the vocabulary. When trained on documentsummary pairs, the model has been shown to remove unnecessary content from sentences and can merge multiple sentences together. In this work, rather than training on documentsummary pairs, we train PG exclusively on ground-truth instances. This removes most of the responsibility of content selection, and allows it to focus its efforts on merging the sentences. We use instances derived from human summaries (§4) to 4We use a coefficient λ of 0.6. 
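As an aside, the MMR selection of Eq. (5) amounts to a short greedy loop. The sketch below uses lambda = 0.6 as in footnote 4, with I(P_k) supplied by BERT or VSM and R(P_k) computed as cosine similarity to the partial summary. The bag-of-words vectors and the stopping criterion (a fixed number of instances rather than a word-length threshold) are our simplifications, not the authors' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two bag-of-words dictionaries."""
    dot = sum(c * v.get(w, 0.0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_select(instance_vecs, importance, budget, lam=0.6):
    """Greedy MMR selection (Eq. 5): repeatedly add the instance maximizing
    lam * I(P_k) - (1 - lam) * R(P_k), where R(P_k) is the cosine
    similarity between the instance and the partial summary so far.

    instance_vecs : bag-of-words dict per candidate singleton/pair
    importance    : I(P_k) scores from BERT or VSM
    budget        : number of instances to select
    """
    selected, summary_vec = [], {}
    remaining = set(range(len(instance_vecs)))
    while remaining and len(selected) < budget:
        best = max(remaining,
                   key=lambda k: lam * importance[k]
                                 - (1 - lam) * cosine(instance_vecs[k], summary_vec))
        selected.append(best)
        remaining.remove(best)
        for w, c in instance_vecs[best].items():   # grow the partial summary
            summary_vec[w] = summary_vec.get(w, 0.0) + c
    return selected
```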
Input Document(s) Scoring Singletons and Pairs 0.30 0.86 0.25 0.02 0.61 Encoder Decoder 2nd Summ Sent 1st Summ Sent Content Selection Summary Generation Figure 2: System architecture. In this example, a sentence pair is chosen (red) and then merged to generate the first summary sentence. Next, a sentence singleton is selected (blue) and compressed for the second summary sentence. train the network, which includes a sentence singleton or pair along with the ground-truth compressed/merged sentence. At test time, the network receives an instance from BERT or VSM and outputs a summary sentence, then repeats this process to generate several sentences. In Figure 2 we present an illustration of the system architecture. 4 Data Our method does not require a massive amount of annotated data. We thus report results on singleand multi-document summarization datasets. We experiment with (i) XSum (Narayan et al., 2018), a new dataset created for extreme, abstractive summarization. The task is to reduce a news article to a short, one-sentence summary. Both source articles and reference summaries are gathered from the BBC website. The training set contains about 204k article-summary pairs and the test contains 11k pairs. (ii) CNN/DM (Hermann et al., 2015), an abstractive summarization dataset frequently exploited by recent studies. The task is to reduce a news article to a multi-sentence summary (4 sentences on average). The training set contains about 287k article-summary pairs and the test set contains 11k pairs. We use the non-anonymzied version of the dataset. (iii) DUC-04 (Over and Yen, 2004), a benchmark multi-document summarization dataset. The task is to create an abstractive summary (5 sentences on average) from a set of 10 documents discussing a given topic. The dataset contains 50 sets of documents used for testing purpose only. Each document set is associated with four human reference summaries. We build a training set for both tasks of content selection and summary generation. This is done 2180 by creating ground-truth sets of instances based on document-summary pairs. Each document and summary pair (D, S) is a collection of sentences D = {d1, d2, ..., dM} and S = {s1, s2, ..., sN}. We wish to associate each summary sentence sn with a subset of the document sentences ˜D ⊆D, which are the sentences that are merged to form sn. Our method chooses multiple sentences that work together to capture the most overlap with summary sentence sn, in the following way. We use averaged ROUGE-1, -2, -L scores (Lin, 2004) to represent sentence similarity. The source sentence most similar to sn is chosen, which we call ˜d1. All shared words are then removed from sn to create s′ n, effectively removing all information already captured by ˜d1. A second source sentence ˜d2 is selected that is most similar to the remaining summary sentence s′ n, and shared words are again removed from s′ n to create s′′ n. This process of sentence selection and overlap removal is repeated until no remaining sentences have at least two overlapping content words (words that are non-stopwords or punctuation) with sn. The result is referred to as a ground-truth set (sn, ˜D) where ˜D = { ˜d1, ˜d2, ..., ˜d| ˜D|}. To train the models, ˜D is limited to one or two sentences because it captures the large majority of cases. All empty ground-truth sets are removed, and only the first two sentences are chosen for all ground-truth sets with more than two sentences. 
A small number of summary sentences have empty ground-truth sets, corresponding to 2.85%, 9.87%, 5.61% of summary sentences in CNN/DM, XSum, and DUC-04 datasets. A detailed plot of the ground-truth set size is illustrated in Figure 1, and samples of the ground-truth are found in the supplementary. We use the standard train/validation/test splits for both CNN/Daily Mail and XSum. We train our models on ground-truth sets of instances created from the training sets and tune hyperparameters using instances from the validation sets. DUC-04 is a test-only dataset, so we use the models trained on CNN/Daily Mail to evaluate DUC-04. Because the input is in the form of multiple documents, we select the first 20 sentences from each document and concatenate them together into a single megadocument (Lebanoff et al., 2018). For the sentence position feature, we keep the sentence positions from the original documents. This handling of sentence position, along with other features that are invariant to the input type, allows us to effectively train on single-document inputs and transfer to the multi-document setting. 5 Results Evaluation Setup In this section we evaluate our proposed methods on identifying summaryworthy instances including singletons and pairs. We compare this scheme with traditional methods extracting only singletons, then introduce novel evaluation strategies to compare results. We exploit several strong extractive baselines: (i) SumBasic (Vanderwende et al., 2007) extracts sentences by assuming words occurring frequently in a document have higher chances of being included in the summary; (ii) KL-Sum (Haghighi and Vanderwende, 2009) greedily adds sentences to the summary to minimize KL divergence; (iii) LexRank (Erkan and Radev, 2004) estimates sentence importance based on eigenvector centrality in a document graph representation. Further, we include the LEAD method that selects the first N sentences from each document. We then require all systems to extract N instances, i.e., either singletons or pairs, from the input document(s).5 We compare system-identified instances with ground-truth instances, and in particular, we compare against the primary, secondary, and full set of ground-truth sentences. A primary sentence is defined as a ground-truth singleton or a sentence in a ground-truth pair that has the highest similarity to the reference summary sentence; the other sentence in the pair is considered secondary, which provides complementary information to the primary sentence. E.g., let S∗={(1, 2), 5, (8, 4), 10} be a ground-truth set of instances, where numbers are sentence indices and the first sentence of each pair is primary. Our ground-truth primary set thus contains {1, 5, 8, 10}; secondary set contains {2, 4}; and the full set of ground-truth sentences contains {1, 2, 5, 8, 4, 10}. Assume S={(1, 2), 3, (4, 10), 15} are system-selected instances. We uncollapse all pairs to obtain a set of single sentences S={1, 2, 3, 4, 10, 15}, then compare them against the primary, secondary, and full set of ground-truth sentences to calculate precision, recall, and F1measure scores. This evaluation scheme allows a fair comparison of a variety of systems for instance selection, and assess their performance on 5 We use N=4/1/5 respectively for the CNN/DM, XSum, and DUC-04 datasets. N is selected as the average number of sentences in reference summaries. 
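The worked example above translates directly into code. The following minimal sketch of the evaluation scheme reuses the example sets S* = {(1,2), 5, (8,4), 10} and S = {(1,2), 3, (4,10), 15} from the text; all function names are ours.

```python
def split_ground_truth(gt_instances):
    """Primary = a singleton, or the first sentence of a pair (the one
    most similar to the reference); secondary = the other sentence."""
    primary, secondary = set(), set()
    for inst in gt_instances:
        if isinstance(inst, tuple):
            primary.add(inst[0])
            secondary.add(inst[1])
        else:
            primary.add(inst)
    return primary, secondary, primary | secondary

def uncollapse(instances):
    """Flatten selected singletons/pairs into a set of single sentences."""
    flat = set()
    for inst in instances:
        flat.update(inst if isinstance(inst, tuple) else (inst,))
    return flat

def prf(selected, reference):
    correct = len(selected & reference)
    p = correct / len(selected) if selected else 0.0
    r = correct / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gt = [(1, 2), 5, (8, 4), 10]           # ground-truth instances
system = [(1, 2), 3, (4, 10), 15]      # system-selected instances
primary, secondary, full = split_ground_truth(gt)   # {1,5,8,10}, {2,4}, {1,2,4,5,8,10}
selected = uncollapse(system)                        # {1, 2, 3, 4, 10, 15}
for name, ref in [("primary", primary), ("secondary", secondary), ("all", full)]:
    print(name, prf(selected, ref))
```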
2181 Primary Secondary All System P R F P R F P R F CNN/Daily Mail LEAD-Baseline 31.9 38.4 34.9 10.7 34.3 16.3 39.9 37.3 38.6 SumBasic (Vanderwende et al., 2007) 15.2 17.3 16.2 5.3 15.8 8.0 19.6 16.9 18.1 KL-Summ (Haghighi et al., 2009) 15.7 17.9 16.7 5.4 15.9 8.0 20.0 17.4 18.6 LexRank (Erkan and Radev, 2004) 22.0 25.9 23.8 7.2 21.4 10.7 27.5 24.7 26.0 VSM-SingOnly (This work) 30.8 36.9 33.6 9.8 34.4 15.2 39.5 35.7 37.5 VSM-SingPairMix (This work) 27.0 46.5 34.2 9.0 42.1 14.9 34.0 45.4 38.9 BERT-SingOnly (This work) 35.3 41.9 38.3 9.8 32.5 15.1 44.0 38.6 41.1 BERT-SingPairMix (This work) 33.6 67.1 44.8 13.6 70.2 22.8 44.7 68.0 53.9 XSum LEAD-Baseline 8.5 9.4 8.9 5.3 9.5 6.8 13.8 9.4 11.2 SumBasic (Vanderwende et al., 2007) 8.7 9.7 9.2 5.0 8.9 6.4 13.7 9.4 11.1 KL-Summ (Haghighi et al., 2009) 9.2 10.2 9.7 5.0 8.9 6.4 14.2 9.7 11.5 LexRank (Erkan and Radev, 2004) 9.7 10.8 10.2 5.5 9.8 7.0 15.2 10.4 12.4 VSM-SingOnly (This work) 12.3 14.1 13.1 3.8 11.0 5.6 17.9 12.0 14.4 VSM-SingPairMix (This work) 10.1 22.6 13.9 4.2 17.4 6.8 14.3 20.8 17.0 BERT-SingOnly (This work) 24.2 26.1 25.1 6.6 16.7 9.5 35.3 20.8 26.2 BERT-SingPairMix (This work) 33.2 56.0 41.7 24.1 65.5 35.2 57.3 59.6 58.5 DUC-04 LEAD-Baseline 6.0 4.8 5.3 2.8 3.8 3.2 8.8 4.4 5.9 SumBasic (Vanderwende et al., 2007) 4.2 3.2 3.6 3.0 3.8 3.3 7.2 3.4 4.6 KL-Summ (Haghighi et al., 2009) 5.6 4.5 5.0 2.8 3.8 3.2 8.0 4.2 5.5 LexRank (Erkan and Radev, 2004) 8.5 6.7 7.5 4.8 6.5 5.5 12.1 6.6 8.6 VSM-SingOnly (This work) 18.0 14.7 16.2 3.6 8.4 5.0 23.6 11.8 15.7 VSM-SingPairMix (This work) 3.8 6.2 4.7 3.6 11.4 5.5 7.4 8.0 7.7 BERT-SingOnly (This work) 8.4 6.5 7.4 2.8 5.3 3.7 15.6 6.6 9.2 BERT-SingPairMix (This work) 4.8 9.1 6.3 4.2 14.2 6.5 9.0 10.9 9.9 Table 2: Instance selection results; evaluated for primary, secondary, and all ground-truth sentences. Our BERTSingPairMix method achieves strong performance owing to its capability of building effective representations for both singletons and pairs. identifying primary and secondary sentences respectively for summary generation. Extraction Results In Table 2 we present instance selection results for the CNN/DM, XSum, and DUC-04 datasets. Our method builds representations for instances using either BERT or VSM (§3.1). To ensure a thorough comparison, we experiment with selecting a mixed set of singletons and pairs (“SingPairMix”) as well as selecting singletons only (“SingOnly”). On the CNN/DM and XSum datasets, we observe that selecting a mixed set of singletons and pairs based on BERT representations (BERT+SingPairMix) demonstrates the most competitive results. It outperforms a number of strong baselines when evaluated on a full set of ground-truth sentences. The method also performs superiorly on identifying secondary sentences. For example, it increases recall scores for identifying secondary sentences from 33.8% to 69.8% (CNN/DM) and from 16.7% to 65.3% (XSum). Our method is able to achieve strong performance on instance selection owing to BERT’s capability of building effective representations for both singletons and pairs. It learns to identify salient source content based on token and position embeddings and it encodes sentential semantic compatibility using the pretraining task of predicting the next sentence; both are valuable additions to summary instance selection. Further, we observe that identifying summaryworthy singletons and pairs from multi-document inputs (DUC-04) appears to be more challenging than that of single-document inputs (XSum and CNN/DM). 
This distinction is not surprising given that for multi-document inputs, the system has a large and diverse search space where candidate singletons and pairs are gathered from a set of documents written by different authors.6 We find that the BERT model performs consistently on identifying secondary sentences, and VSM yields considerable performance gain on selecting primary sentences. Both BERT and VSM models are trained on the CNN/DM dataset and applied to DUC-04 as the latter data are only used for testing. Our findings suggest that the TF-IDF features of the VSM model are effective for multi-document 6For the DUC-04 dataset, we select top K sentences from each document (K=5) and pool them as candidate singletons. Candidate pairs consist of arbitrary combinations of singletons. For all datasets we perform downsampling to balance the number of positive and negative singletons (or pairs). 2182 CNN/Daily Mail System R-1 R-2 R-L SumBasic (Vanderwende et al., 2007) 34.11 11.13 31.14 KLSumm (Haghighi et al., 2009) 29.92 10.50 27.37 LexRank (Erkan and Radev, 2004) 35.34 13.31 31.93 PointerGen+Cov (See et al., 2017) 39.53 17.28 36.38 BERT-Abs w/ SS (This Work) 35.49 15.12 33.03 BERT-Abs w/ PG (This Work) 37.15 15.22 34.60 BERT-Extr (This Work) 41.13 18.68 37.75 GT-SingPairMix (This Work) 48.73 26.59 45.29 XSum System R-1 R-2 R-L SumBasic (Vanderwende et al., 2007) 18.56 2.91 14.88 KLSumm (Haghighi et al., 2009) 16.73 2.83 13.53 LexRank (Erkan and Radev, 2004) 17.95 3.00 14.30 BERT-Abs w/ PG (This Work) 25.08 6.48 19.75 BERT-Extr (This Work) 23.53 4.54 17.23 GT-SingPairMix (This Work) 27.90 7.31 21.04 DUC-04 System R-1 R-2 R-SU4 SumBasic (Vanderwende et al., 2007) 29.48 4.25 8.64 KLSumm (Haghighi et al., 2009) 31.04 6.03 10.23 LexRank (Erkan and Radev, 2004) 34.44 7.11 11.19 Extract+Rewrite (Song et al., 2018) 28.90 5.33 8.76 Opinosis (Ganesan et al., 2010) 27.07 5.03 8.63 BERT-Abs w/ PG (This Work) 27.95 4.13 7.75 BERT-Extr (This Work) 30.49 5.12 9.05 GT-SingPairMix (This Work) 41.42 13.67 16.38 Table 3: Summarization results on various datasets. Whether abstractive summaries (BERT-Abst) outperform its extractive variant (BERT-Extr) appears to be related to the amount of sentence pairs selected by BERT-SingPairMix. Selecting more pairs than singletons seems to hurt the abstractor. inputs, as important topic words are usually repeated across documents and TF-IDF scores can reflect topical importance of words. This analysis further reveals that extending BERT to incorporate topical salience of words can be a valuable line of research for future work. Summarization Results We present summarization results in Table 3, where we assess both extractive and abstractive summaries generated by BERT-SingPairMix. We omit VSM results as they are not as competitive as BERT on instance selection for the mixed set of singletons and pairs. The extractive summaries “BERT-Extr” are formed by concatenating selected singletons and pairs for each document, whereas “GT-SingPairMix” concatenates ground-truth singletons and pairs; it provides an upper bound for any system generating a set of singletons and pairs as the summary. To assure fair comparison, we limit all extractive summaries to contain up to 100 words (40 words for CNN/DM Primary Primary Secondary XSum 0.0 0.2 0.4 0.6 0.8 1.0 Sent Position (Singles) DUC-04 0.0 0.2 0.4 0.6 0.8 1.0 Sent Position (Pairs) Figure 3: Position of ground-truth singletons and pairs in a document. 
The singletons of XSum can occur anywhere; the first and second sentence of a pair also appear far apart. XSum) for ROUGE evaluation7, where R-1, R-2, R-L, and R-SU4 are variants used to measure the overlap of unigrams, bigrams, longest common subsequences, and skip bigrams (with a maximum distance of 4) between system and reference summaries (Lin, 2004). The abstractive summaries are generated from the same singletons and pairs used to form system extracts. “BERT-Abs-PG” generates an abstract by iteratively encoding singletons or pairs and decoding summary sentences using pointer-generator networks (§3.2).8 Our BERT summarization systems achieve results largely on par with those of prior work. It is interesting to observe that the extractive variant (BERT-Extr) can outperform its abstractive counterparts on DUC-04 and CNN/DM datasets, and vice versa on XSum. A close examination of the results reveals that whether abstractive summaries outperform appears to be related to the amount of sentence pairs selected by “BERTSingPairMix.” Selecting more pairs than singletons seems to hurt the abstractor. For example, BERT selects 100% and 76.90% sentence pairs for DUC-04 and CNN/DM respectively, and 28.02% for XSum. These results suggest that existing abstractors using encoder-decoder models may need to improve on sentence fusion. These models are trained to generate fluent sentences more than preserving salient source content, leading to important content words being skipped in generating summary sentences. Our work intends to separate the tasks of sentence selection and summary generation, thus holding promise for improving compression and merging in the future. We present 7w/ ROUGE options: -n 2 -m -2 4 -w 1.2 -c 95 -r 1000 -l 100 8We include an additional in-house system “BERT-AbsSS” for CNN/DM that takes the same input but generates summary sentences using a tree-based decoder. 2183 CNN/DM DUC−04 XSum 1st 2nd 3rd 4th 5th 1st 2nd 3rd 4th 5th 1st 2nd 3rd 4th 5th 0% 20% 40% 60% 80% 100% Sentence in the Reference Summary InstanceType Compression Fusion Figure 4: A sentence’s position in a human summary can affect whether or not it is created by compression or fusion. example system summaries in the supplementary. Further analysis In this section we perform a series of analyses to understand where summaryworthy content is located in a document and how humans order them into a summary. Figure 3 shows the position of ground-truth singletons and pairs in a document. We observe that singletons of CNN/DM and DUC-04 tend to occur at the beginning of a document, whereas singletons of XSum can occur anywhere. We also find that the first and second sentence of a pair can appear far apart for XSum, but are closer for CNN/DM. These findings suggest that selecting singletons and pairs for XSum can be more challenging than others, as indicated by the name “extreme” summarization. Figure 4 illustrates how humans choose to organize content into a summary. Interestingly, we observe that a sentence’s position in a human summary affects whether or not it is created by compression or fusion. The first sentence of a humanwritten summary is more likely than the following sentences to be a fusion of multiple source sentences. This is the case across all three datasets. We conjecture that the first sentence of a summary is expected to give an overview of the document and needs to consolidate information from different parts. Other sentences of a human summary can be generated by simply shortening singletons. 
Our statistics reveal that DUC-04 and XSum summaries involve more fusion operations, exhibiting a higher level of abstraction than CNN/DM. 6 Conclusion We present an investigation into the feasibility of scoring singletons and pairs according to their likelihoods of producing summary sentences. Our framework is founded on the human process of selecting one or two sentences to merge together and it has the potential to bridge the gap between compression and fusion studies. Our method provides a promising avenue for domain-specific summarization where content selection and summary generation are only loosely connected to reduce the costs of obtaining massive annotated data. Acknowledgments We are grateful to the reviewers for their insightful comments that point to interesting future directions. The authors also thank students in the UCF NLP group for useful discussions. References Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3). Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca J. Passonneau. 2015. Abstractive multi-document summarization via phrase selection and merging. In Proceedings of ACL. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018a. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018b. Faithful to the original: Fact aware neural abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). Giuseppe Carenini, Raymond Ng, and Adam Pauls. 2006. Multi-document summarization of evaluative text. In Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016a. A thorough examination of the 2184 cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016b. Distraction-based neural networks for document summarization. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Victor Chenal and Jackie Chi Kit Cheung. 2016. Predicting sentential semantic compatibility for aggregation in text-to-text generation. In Proceedings of the International Conference on Computational Linguistics (COLING). Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of ACL. 
Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. 2019. Improving the similarity measure of determinantal point processes for extractive multidocument summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Hal Daum´e III and Daniel Marcu. 2002. A noisychannel model for document compression. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Hal Daum´e III and Daniel Marcu. 2004. Generic sentence fusion is an ill-defined summarization task. In Proceedings of ACL Workshop on Text Summarization Branches Out. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the Association for Computational Linguistics (ACL). G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the International Conference on Computational Linguistics (COLING). Katja Filippova, Enrique Alfonseca, Carlos Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Dimitrios Galanis, Gerasimos Lampouras, and Ion Androutsopoulos. 2012. Extractive multi-document summarization with integer linear programming and support vector regression. In Proceedings of the International Conference on Computational Linguistics (COLING). Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the International Conference on Computational Linguistics (COLING). Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the NAACL Workshop on Integer Linear Programming for Natural Langauge Processing. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft, layer-specific multi-task summarization with entailment and question generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of Neural Information Processing Systems (NIPS). Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multidocument summarization. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL). Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2185 Alex Kulesza and Ben Taskar. 2011. Learning determinantal point processes. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI). Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013. Document summarization via guided sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). Chen Li, Yang Liu, Fei Liu, Lin Zhao, and Fuliang Weng. 2014. Improving multi-document summarization by sentence compression based on expanded constituent parse tree. In Proceedings of the Conference on Empirical Methods on Natural Language Processing (EMNLP). Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract meaning representation for multi-document summarization. In Proceedings of the International Conference on Computational Linguistics (COLING). Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In Proceedings of ACL Workshop on Text Summarization Branches Out. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press. Andre F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the ACL Workshop on Integer Linear Programming for Natural Language Processing. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of SIGNLL. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval. Paul Over and James Yen. 2004. An introduction to DUC-2004. National Institute of Standards and Technology. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. In Proceedings of EMNLP. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Chao Shen and Tao Li. 2010. Multi-document summarization via the minimum dominating set. In Proceedings of the International Conference on Computational Linguistics (COLING). Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. 
Structure-infused copy mechanisms for abstractive summarization. In Proceedings of the International Conference on Computational Linguistics (COLING). Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Kapil Thadani and Kathleen McKeown. 2013. Supervised sentence fusion with single-stage inference. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP). Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Taskfocused summarization with sentence simplification and lexical expansion. Information Processing and Management, 43(6):1606–1618. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. https://arxiv.org/abs/1706.03762. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS). Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, and Claire Cardie. 2013. A sentence compression based framework to query-focused multidocument summarization. In Proceedings of ACL. Kam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Extractive summarization using supervised and semisupervised learning. In Proceedings of the International Conference on Computational Linguistics (COLING). 2186 David Zajic, Bonnie J. Dorr, Jimmy Lin, and Richard Schwartz. 2007. Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. Information Processing and Management. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). A Ground-truth Sets of Instances We performed a manual inspection over a subset of our ground-truth sets of singletons and pairs. Each sentence from a human-written summary is matched with one or two source sentences based on average ROUGE similarity (details in Section 4 of the paper). Tables 4, 5, and 6 present randomly selected examples from CNN/Daily Mail, XSum, and DUC-04, respectively. Colored text represents overlapping tokens between sentences. Darker colors represent content from primary sentences, while lighter colors represent content from secondary sentences. Best viewed in color. B Example Summaries Table 7 presents example system summaries and human-written abstracts from CNN/Daily Mail. Each Human Abstract sentence is matched with a sentence singleton or pair from the source document; these singletons/pairs make up the GTSingPairMix summary. Similarly, each sentence from BERT-Abs is created by compressing a singleton or merging a pair selected by BERT-Extr. 2187 Selected Source Sentence(s) Human Summary Sentence an inmate housed on the “ forgotten floor , ” where many mentally ill inmates are housed in miami before trial . mentally ill inmates in miami are housed on the “ forgotten floor ” most often , they face drug charges or charges of assaulting an officer – charges that judge steven leifman says are usually “ avoidable felonies . ” judge steven leifman says most are there as a result of “ avoidable felonies ” “ i am the son of the president . miami , florida -lrbcnn -rrb- – the ninth floor of the miami-dade pretrial detention facility is dubbed the “ forgotten floor . 
” while cnn tours facility , patient shouts : “ i am the son of the president ” it ’s brutally unjust , in his mind , and he has become a strong advocate for changing things in miami . so , he says , the sheer volume is overwhelming the system , and the result is what we see on the ninth floor . leifman says the system is unjust and he ’s fighting for change . Selected Source Sentence(s) Human Summary Sentence the average surface temperature has warmed one degree fahrenheit -lrb- 0.6 degrees celsius -rrb- during the last century , according to the national research council . earth has warmed one degree in past 100 years . the reason most cited – by scientists and scientific organizations – for the current warming trend is an increase in the concentrations of greenhouse gases , which are in the atmosphere naturally and help keep the planet ’s temperature at a comfortable level . in the worst-case scenario , experts say oceans could rise to overwhelming and catastrophic levels , flooding cities and altering seashores . majority of scientists say greenhouse gases are causing temperatures to rise . a change in the earth ’s orbit or the intensity of the sun ’s radiation could change , triggering warming or cooling . other scientists and observers , a minority compared to those who believe the warming trend is something ominous , say it is simply the latest shift in the cyclical patterns of a planet ’s life . some critics say planets often in periods of warming or cooling . Table 4: Sample of our ground-truth labels for singleton/pair instances from CNN/Daily Mail. Large chunks of text are copied straight out of the source sentences. Selected Source Sentence(s) Human Summary Sentence the premises , used by east belfast mp naomi long , have been targeted a number of times . army explosives experts were called out to deal with a suspect package at the offices on the newtownards road on friday night . a suspicious package left outside an alliance party office in east belfast has been declared a hoax . Selected Source Sentence(s) Human Summary Sentence nev edwards scored an early try for sale , before castres ’ florian vialelle went over , but julien dumora ’s penalty put the hosts 10-7 ahead at the break . a late penalty try gave sale victory over castres at stade pierre-antoine in their european challenge cup clash . Selected Source Sentence(s) Human Summary Sentence speaking in the dil , sinn fin leader gerry adams also called for a commission of investigation and said his party had “ little confidence the government is protecting the public interest ” . last year , nama sold its entire 850-property loan portfolio in northern ireland to the new york investment firm cerberus for more than # 1bn . the irish government has rejected calls to set up a commission of investigation into the sale of nama ’s portfolio of loans in northern ireland . Table 5: Sample of our ground-truth labels for singleton/pair instances from XSum. Each article has only one summary sentences, and thus only one singleton or pair matched with it. 2188 Selected Source Sentence(s) Human Summary Sentence hun sen ’s cambodian people ’s party won 64 of the 122 parliamentary seats in july ’s elections , short of the two-thirds majority needed to form a government on its own . cambodian elections , fraudulent according to opposition parties , gave the cpp of hun sen a scant majority but not enough to form its own government . 
opposition leaders prince norodom ranariddh and sam rainsy , citing hun sen ’s threats to arrest opposition figures after two alleged attempts on his life , said they could not negotiate freely in cambodia and called for talks at sihanouk ’s residence in beijing . cambodian leader hun sen has guaranteed the safety and political freedom of all politicians , trying to ease the fears of his rivals that they will be arrested or killed if they return to the country . opposition leaders fearing arrest , or worse , fled and asked for talks outside the country . the cambodian people ’s party criticized a non-binding resolution passed earlier this month by the u.s. house of representatives calling for an investigation into violations of international humanitarian law allegedly committed by hun sen . the un found evidence of rights violations by hun sen prompting the us house to call for an investigation . cambodian politicians expressed hope monday that a new partnership between the parties of strongman hun sen and his rival , prince norodom ranariddh , in a coalition government would not end in more violence . the three-month governmental deadlock ended with han sen and his chief rival , prince norodom ranariddh sharing power . citing hun sen ’s threats to arrest opposition politicians following two alleged attempts on his life , ranariddh and sam rainsy have said they do not feel safe negotiating inside the country and asked the king to chair the summit at gis residence in beijing . after a meeting between hun sen and the new french ambassador to cambodia , hun sen aide prak sokhonn said the cambodian leader had repeated calls for the opposition to return , but expressed concern that the international community may be asked for security guarantees . han sen guaranteed safe return to cambodia for all opponents but his strongest critic , sam rainsy , remained wary . diplomatic efforts to revive the stalled talks appeared to bear fruit monday as japanese foreign affairs secretary of state nobutaka machimura said king norodom sihanouk has called on ranariddh and sam rainsy to return to cambodia . king norodom sihanouk on tuesday praised agreements by cambodia ’s top two political parties – previously bitter rivals – to form a coalition government led by strongman hun sen . chief of state king norodom sihanouk praised the agreement . Table 6: Sample of our ground-truth labels for singleton/pair instances from DUC-04, a multi-document dataset. Ground-truth sentences are widely dispersed among all ten documents. 2189 Extractive Upper Bound • She’s a high school freshman with Down syndrome. • Trey – a star on Eastern High School’s basketball team in Louisville, Kentucky, who’s headed to play college ball next year at Ball State – was originally going to take his girlfriend to Eastern’s prom. • Trina Helson, a teacher at Eastern, alerted the school’s newspaper staff to the prom-posal and posted photos of Trey and Ellie on Twitter that have gone viral. BERT-Extractive • But all that changed Thursday when Trey asked Ellie to be his prom date. • Trey – a star on Eastern High School’s basketball team in Louisville, Kentucky, who’s headed to play college ball next year at Ball State – was originally going to take his girlfriend to Eastern’s prom. • Trina Helson, a teacher at Eastern, alerted the school’s newspaper staff to the prom-posal and posted photos of Trey and Ellie on Twitter that have gone viral. • (CNN) He’s a blue chip college basketball recruit. • She’s a high school freshman with Down syndrome. 
Human Abstract • College-bound basketball star asks girl with Down syndrome to high school prom. • Pictures of the two during the ”prom-posal” have gone viral. BERT-Abstractive • Trey asked Ellie to be his prom date. • Trina Helson, a teacher at Eastern, alerted the school’s newspaper staff. • He’s a high school student with Down syndrome. Extractive Upper Bound • Marseille prosecutor Brice Robin told CNN that ”so far no videos were used in the crash investigation.” • Reichelt told ”Erin Burnett: outfront” that he had watched the video and stood by the report, saying Bild and Paris Match are ”very confident” that the clip is real. • Lubitz told his Lufthansa flight training school in 2009 that he had a ”previous episode of severe depression,” the airline said Tuesday. BERT-Extractive • Marseille, France (CNN) - the French prosecutor leading an investigation into the crash of Germanwings flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. • Marseille prosecutor Brice Robin told CNN that ”so far no videos were used in the crash investigation.” • Robin’s comments follow claims by two magazines, German Daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings flight 9525 as it crashed into the French Alps. • The two publications described the supposed video, but did not post it on their websites. Human Abstract • Marseille prosecutor says ”so far no videos were used in the crash investigation” despite media reports. • Journalists at Bild and Paris Match are ”very confident” the video clip is real, an editor says. • Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says. BERT-Abstractive • New : French prosecutor says he was not aware of video footage from on board the plane. • Two magazines, including German Daily Bild, have been described as the video. Table 7: Example system summaries and human-written abstracts. Each Human Abstract sentence is lined up horizontally with its corresponding ground-truth instance, which is found in Extractive Upper Bound summary. Similarly, each sentence from BERT-Abstractive is lined up horizontally with its corresponding instance selected by BERT-Extractive. The sentences are manually de-tokenized for readability.
2019
209
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 211–221 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 211 Revisiting Low-Resource Neural Machine Translation: A Case Study Rico Sennrich1,2 Biao Zhang1 1School of Informatics, University of Edinburgh [email protected], [email protected] 2Institute of Computational Linguistics, University of Zurich Abstract It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German–English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean–English dataset, surpassing previously reported results by 4 BLEU. 1 Introduction While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions (Koehn and Knowles, 2017; Lample et al., 2018b). In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows: • we explore best practices for low-resource NMT, evaluating their importance with ablation studies. • we reproduce a comparison of NMT and PBSMT in different data conditions, showing that when following our best practices, NMT outperforms PBSMT with as little as 100 000 words of parallel training data. Figure 1: quality of PBSMT and NMT in low-resource conditions according to (Koehn and Knowles, 2017). 2 Related Work 2.1 Low-Resource Translation Quality Compared Across Systems Figure 1 reproduces a plot by Koehn and Knowles (2017) which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by Lample et al. (2018b) are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions. 2.2 Improving Low-Resource Neural Machine Translation The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model (Gülçehre et al., 2015) to the training of parts of the NMT model with additional objectives, including a language modelling objective (Gülçehre et al., 2015; Sennrich et al., 2016b; Ramachandran et al., 2017), an autoencoding objective (Luong et al., 2016; Currey et al., 2017), or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language (Sennrich et al., 2016b; He et al., 2016; Cheng et al., 2016).
As an extreme case, models that rely exclusively on monolingual data have been shown to work (Artetxe et al., 2018b; Lample et al., 2018a; Artetxe et al., 2018a; Lample et al., 2018b). Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations (Zoph et al., 2016; Chen et al., 2017; Nguyen and Chiang, 2017; Neubig and Hu, 2018; Gu et al., 2018a,b; Kocmi and Bojar, 2018). While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match (Søgaard et al., 2018) More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes ( ¨Ostling and Tiedemann, 2017; Nguyen and Chiang, 2018). 3 Methods for Low-Resource Neural Machine Translation 3.1 Mainstream Improvements We consider the hyperparameters used by Koehn and Knowles (2017) to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture (Miceli Barone et al., 2017), label smoothing (Szegedy et al., 2016), dropout (Srivastava et al., 2014), word dropout (Sennrich et al., 2016a), layer normalization (Ba et al., 2016) and tied embeddings (Press and Wolf, 2017). 3.2 Language Representation Subword representations such as BPE (Sennrich et al., 2016c) have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; Haddow et al. (2018) report mixed results when comparing vocabularies of 30k and 90k subwords. In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. Sennrich et al. (2017a) propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets.1 3.3 Hyperparameter Tuning Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and lowresource settings. While the trend in high-resource settings is towards using larger and deeper models, Nguyen and Chiang (2018) use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT (Morishita et al., 2017; Neishi et al., 2017), but we 1In related work, Cherry et al. (2018) have shown that, given deep encoders and decoders, character-level models can outperform other subword segmentations. 
In preliminary experiments, a character-level model performed poorly in our low-resource setting. 213 find that using smaller batches is beneficial in lowresource settings. More aggressive dropout, including dropping whole words at random (Gal and Ghahramani, 2016), is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition. 3.4 Lexical Model Finally, we implement and test the lexical model by Nguyen and Chiang (2018), which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step t is the weighted average of source embeddings f (the attention weights a are shared with the main model). After a feedforward layer (with skip connection), the lexical model’s output hl t is combined with the original model’s hidden state ho t before softmax computation. fl t = tanh X s at(s)fs hl t = tanh(Wfl t) + fl t p(yt|y<t, x) = softmax(W oho t + bo + W lhl t + bl) Our implementation adds dropout and layer normalization to the lexical model.2 4 Experiments 4.1 Data and Preprocessing We use the TED data from the IWSLT 2014 German→English shared translation task (Cettolo et al., 2014). We use the same data cleanup and train/dev split as Ranzato et al. (2016), resulting in 159 000 parallel sentences of training data, and 7584 for development. As a second language pair, we evaluate our systems on a Korean–English dataset3 with around 90 000 parallel sentences of training data, 1000 for development, and 2000 for testing. For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30 000 merge operations, shared between German and English, and independently for Korean→English. 2Implementation released in Nematus: https://github.com/EdinburghNLP/nematus 3https://sites.google.com/site/ koreanparalleldata/ subword vocabulary sentences words (EN) DE/KO EN DE→EN 159 000 3 220 000 18 870 13 830 80 000 1 610 000 9850 7740 40 000 810 000 7470 5950 20 000 400 000 5640 4530 10 000 200 000 3760 3110 5000 100 000 2380 1990 KO→EN 94 000 2 300 000 32 082 16 006 Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data. To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see 3.2). Table 1 shows statistics for each subcorpus, including the subword vocabulary. Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU (Papineni et al., 2002; Post, 2018).4 Like Ranzato et al. (2016), we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012). 4.2 PBSMT Baseline We use Moses (Koehn et al., 2007) to train a PBSMT system. We use MGIZA (Gao and Vogel, 2008) to train word alignments, and lmplz (Heafield et al., 2013) for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA (Cherry and Foster, 2012) – we perform multiple runs where indicated. Unlike Koehn and Knowles (2017), we do not use extra data for the LM. 
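Because the lexical model of Section 3.4 is defined by just three equations (f^l_t = tanh(sum_s a_t(s) f_s); h^l_t = tanh(W f^l_t) + f^l_t; p(y_t | y_<t, x) = softmax(W^o h^o_t + b^o + W^l h^l_t + b^l)), it is easy to sketch in code. The following is a minimal, illustrative PyTorch version; the authors' actual implementation is part of Nematus and additionally applies dropout and layer normalization, and the class name and tensor shapes below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LexicalModel(nn.Module):
    """Sketch of the lexical model of Nguyen and Chiang (2018), as summarized in Section 3.4.

    Combines a weighted average of source embeddings (reusing the main model's
    attention weights) with the decoder state before the softmax.
    """

    def __init__(self, embed_dim: int, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.ff = nn.Linear(embed_dim, embed_dim)             # W   (feed-forward layer with skip connection)
        self.out_hidden = nn.Linear(hidden_dim, vocab_size)   # W^o, b^o  (original output projection)
        self.out_lexical = nn.Linear(embed_dim, vocab_size)   # W^l, b^l  (lexical output projection)

    def forward(self, attn_weights, src_embeds, dec_state):
        # attn_weights: (batch, src_len)   shared attention weights a_t(s)
        # src_embeds:   (batch, src_len, embed_dim)   source embeddings f_s
        # dec_state:    (batch, hidden_dim)           original hidden state h^o_t
        f_l = torch.tanh(torch.bmm(attn_weights.unsqueeze(1), src_embeds)).squeeze(1)  # f^l_t
        h_l = torch.tanh(self.ff(f_l)) + f_l                                           # h^l_t with skip connection
        logits = self.out_hidden(dec_state) + self.out_lexical(h_l)                    # W^o h^o_t + b^o + W^l h^l_t + b^l
        return F.log_softmax(logits, dim=-1)
```

The intuition behind this design is that the attended source embeddings reach the output logits through a short, direct path, giving the decoder a lexical shortcut that does not have to be learned through the recurrent state.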
Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see 2.2). 214 BLEU ID system 100k 3.2M 1 phrase-based SMT 15.87 ± 0.19 26.60 ± 0.00 2 NMT baseline 0.00 ± 0.00 25.70 ± 0.33 3 2 + ”mainstream improvements” (dropout, tied embeddings, 7.20 ± 0.62 31.93 ± 0.05 layer normalization, bideep RNN, label smoothing) 4 3 + reduce BPE vocabulary (14k →2k symbols) 12.10 ± 0.16 5 4 + reduce batch size (4k →1k tokens) 12.40 ± 0.08 31.97 ± 0.26 6 5 + lexical model 13.03 ± 0.49 31.80 ± 0.22 7 5 + aggressive (word) dropout 15.87 ± 0.09 33.60 ± 0.14 8 7 + other hyperparameter tuning (learning rate, 16.57 ± 0.26 32.80 ± 0.08 model depth, label smoothing rate) 9 8 + lexical model 16.10 ± 0.29 33.30 ± 0.08 Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported. 105 106 0 10 20 30 32.8 30.8 28.7 24.4 20.6 16.6 26.6 24.9 23 20.5 18.3 16 25.7 18.5 11.6 1.8 1.3 0 corpus size (English words) BLEU neural MT optimized phrase-based SMT neural MT baseline Figure 2: German→English learning curve, showing BLEU as a function of the amount of parallel training data, for PBSMT and NMT. 4.3 NMT Systems We train neural systems with Nematus (Sennrich et al., 2017b). Our baseline mostly follows the settings in (Koehn and Knowles, 2017); we use adam (Kingma and Ba, 2015) and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work). We subsequently add the methods described in section 3, namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch 4Signature BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.3.2. size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix A. 5 Results Table 2 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our ”mainstream improvements” add around 6–7 BLEU in both data conditions. In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout6 (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2→16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9→32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) 5beam search results reported by Wiseman and Rush (2016). 6p = 0.3 for dropping words; p = 0.5 for other dropout. 
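Since aggressive word dropout is the single largest contributor in the ultra-low data condition (+3.4 BLEU), it is worth spelling out. One common formulation, sketched below, drops whole tokens by zeroing their embedding vectors with probability p (0.3 for words, per footnote 6); this is an illustration rather than the Nematus implementation, and whether to rescale the surviving embeddings, as in standard dropout, is a design choice left out here.

```python
import torch

def word_dropout(embeddings: torch.Tensor, p: float = 0.3, training: bool = True) -> torch.Tensor:
    """Randomly drop whole words by zeroing their embedding vectors.

    embeddings: (batch, seq_len, embed_dim) word embeddings.
    p: probability of dropping each token.
    """
    if not training or p <= 0.0:
        return embeddings
    # One Bernoulli draw per token, shared across the embedding dimension,
    # so a dropped word disappears entirely rather than losing random features.
    keep_mask = torch.rand(embeddings.shape[:2], device=embeddings.device) >= p
    return embeddings * keep_mask.unsqueeze(-1).to(embeddings.dtype)
```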
215 system BLEU MIXER (Ranzato et al., 2016)5 21.8 BSO (Wiseman and Rush, 2016) 25.5 NPMT+LM (Huang et al., 2018) 30.1 MRT (Edunov et al., 2018) 32.84 ± 0.08 Pervasive Attention (Elbayad et al., 2018) 33.8 Transformer Baseline (Wu et al., 2019) 34.4 Dynamic Convolution (Wu et al., 2019) 35.2 our PBSMT (1) 28.19 ± 0.01 our NMT baseline (2) 27.16 ± 0.38 our NMT best (7) 35.27 ± 0.14 Table 3: Results on full IWSLT14 German→English data on tokenized and lowercased test set with multi-bleu.perl. system BLEU (Gu et al., 2018b) 5.97 (supervised Transformer) phrase-based SMT 6.57 ± 0.17 NMT baseline (2) 2.93 ± 0.34 NMT optimized (8) 10.37 ± 0.29 Table 4: Korean→English results. Mean and standard deviation of three training runs reported. to other data conditions, and Korean→English, for simplicity. For a comparison with PBSMT, and across different data settings, consider Figure 2, which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by Koehn and Knowles (2017). However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix B. For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table 3. Our results far outperform the RNN-based results reported by Wiseman and Rush (2016), and are on par with the best reported results on this dataset. Table 4 shows results for Korean→English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by Gu et al. (2018b). 6 Conclusions Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semisupervised workflows, for instance for the backtranslation of monolingual data. Acknowledgments Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212 169888). Biao Zhang acknowledges the support of the Baidu Scholarship. 216 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised Statistical Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. 
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised Neural Machine Translation. In International Conference on Learning Representations. Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. CoRR, abs/1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR). Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In Proceedings of the 11th Workshop on Spoken Language Translation, pages 2–16, Lake Tahoe, CA, USA. Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A Teacher-Student Framework for ZeroResource Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935, Vancouver, Canada. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. SemiSupervised Learning for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1965–1974, Berlin, Germany. Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 427–436, Montreal, Canada. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting Character-Based Neural Machine Translation with Capacity and Compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295–4305, Brussels, Belgium. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied Monolingual Data Improves Low-Resource Neural Machine Translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156, Copenhagen, Denmark. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 355–364, New Orleans, Louisiana. Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 97–107, Brussels, Belgium. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems 29, pages 1019–1027. Qin Gao and Stephan Vogel. 2008. Parallel Implementations of Word Alignment Tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing, pages 49–57, Columbus, Ohio. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018a. Universal Neural Machine Translation for Extremely Low Resource Languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 344–354, New Orleans, Louisiana. Jiatao Gu, Yong Wang, Yun Chen, Victor O. 
K. Li, and Kyunghyun Cho. 2018b. Meta-Learning for LowResource Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622–3631, Brussels, Belgium. C¸ aglar G¨ulc¸ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo¨ıc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On Using Monolingual Corpora in Neural Machine Translation. CoRR, abs/1503.03535. Barry Haddow, Nikolay Bogoychev, Denis Emelin, Ulrich Germann, Roman Grundkiewicz, Kenneth Heafield, Antonio Valerio Miceli Barone, and Rico Sennrich. 2018. The University of Edinburgh’s Submissions to the WMT18 News Translation Task. In Proceedings of the Third Conference on Machine Translation, pages 403–413, Belgium, Brussels. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual Learning for Machine Translation. In Advances in Neural Information Processing Systems 29, pages 820–828. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modified 217 Kneser-Ney Language Model Estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria. Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. 2018. Towards Neural Phrasebased Machine Translation. In International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In The International Conference on Learning Representations, San Diego, California, USA. Tom Kocmi and Ondˇrej Bojar. 2018. Trivial Transfer Learning for Low-Resource Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation, pages 244–252, Belgium, Brussels. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Philipp Koehn and Rebecca Knowles. 2017. Six Challenges for Neural Machine Translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised Machine Translation Using Monolingual Corpora Only. In International Conference on Learning Representations. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-Based & Neural Unsupervised Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5039–5049, Brussels, Belgium. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task Sequence to Sequence Learning. In The International Conference on Learning Representations. Antonio Valerio Miceli Barone, Jindˇrich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep Architectures for Neural Machine Translation. In Proceedings of the Second Conference on Machine Translation, Volume 1: Research Papers, Copenhagen, Denmark. Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An Empirical Study of MiniBatch Creation Strategies for Neural Machine Translation. 
In The First Workshop on Neural Machine Translation (NMT), pages 61–68, Vancouver, Canada. Masato Neishi, Jin Sakuma, Satoshi Tohda, Shonosuke Ishiwatari, Naoki Yoshinaga, and Masashi Toyoda. 2017. A Bag of Useful Tricks for Practical Neural Machine Translation: Embedding Layer Initialization and Large Batch Size. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pages 99–109, Taipei, Taiwan. Graham Neubig and Junjie Hu. 2018. Rapid Adaptation of Neural Machine Translation to New Languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875–880, Brussels, Belgium. Toan Nguyen and David Chiang. 2018. Improving Lexical Choice in Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 334–343, New Orleans, Louisiana. Toan Q. Nguyen and David Chiang. 2017. Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 296–301, Taipei, Taiwan. Robert ¨Ostling and J¨org Tiedemann. 2017. Neural machine translation for low-resource languages. CoRR, abs/1708.05729. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, PA. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Ofir Press and Lior Wolf. 2017. Using the Output Embedding to Improve Language Models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain. Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 383–391, Copenhagen, Denmark. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence Level Training with Recurrent Neural Networks. In The 218 International Conference on Learning Representations. Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, and Philip Williams. 2017a. The University of Edinburgh’s Neural MT Systems for WMT17. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Copenhagen, Denmark. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L¨aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017b. Nematus: a Toolkit for Neural Machine Translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65–68, Valencia, Spain. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh Neural Machine Translation Systems for WMT 16. In Proceedings of the First Conference on Machine Translation, Volume 2: Shared Task Papers, pages 368–373, Berlin, Germany. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving Neural Machine Translation Models with Monolingual Data. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016c. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Anders Søgaard, Sebastian Ruder, and Ivan Vulic. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112, Montreal, Quebec, Canada. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Z. Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296–1306, Austin, Texas. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer Learning for Low-Resource Neural Machine Translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575, Austin, Texas. 219 A Hyperparameters Table 5 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1). B Sample Translations Table 6 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (‘bloodstained’) or Spaniern (‘Spaniards’, ‘Spanish’), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns (’that’, ’which’, ’who’), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a moreor-less fluent, but semantically inadequate translation: erobert (’conquered’) is translated into doing, and richtig aufgezeichnet (’registered correctly’, ‘recorded correctly’) into really the first thing. 
220 system hyperparameter 2 3 5 6 7 8 9 hidden layer size 1024 embedding size 512 encoder depth 1 2 1 encoder recurrence transition depth 1 2 decoder depth 1 2 1 dec. recurrence transition depth (base) 2 4 2 dec. recurrence transition depth (high) 2 tie decoder embeddings yes layer normalization yes lexical model yes yes hidden dropout 0.2 0.5 embedding dropout 0.2 0.5 source word dropout 0.1 0.3 target word dropout 0.3 label smoothing 0.1 0.2 maximum sentence length 200 minibatch size (# tokens) 4000 1000 learning rate 0.0001 0.0005 optimizer adam early stopping patience 10 validation interval: IWSLT 100k / 200k / 400k 50 100 400 IWSLT ≥800k / KO-EN 2.3M 1000 2000 8000 beam size 5 Table 5: Configurations of NMT systems reported in Table 2. Empty fields indicate that hyperparameter was unchanged compared to previous systems. 221 source In einem blutbefleckten Kontinent, waren diese Menschen die einzigen, die nie von den Spaniern erobert wurden. reference In a bloodstained continent, these people alone were never conquered by the Spanish. PBSMT 100k In a blutbefleckten continent, were these people the only, the never of the Spaniern erobert were. PBSMT 3.2M In a blutbefleckten continent, these people were the only ones that were never of the Spaniern conquered. NMT 3.2M (baseline) In a blinging tree continent, these people were the only ones that never had been conquered by the Spanians. NMT 100k (optimized) In a blue-flect continent, these people were the only one that has never been doing by the spaniers. NMT 3.2M (optimized) In a bleed continent, these people were the only ones who had never been conquered by the Spanians. source Dies ist tatschlich ein Poster von Notre Dame, das richtig aufgezeichnet wurde. reference This is actually a poster of Notre Dame that registered correctly. PBSMT 100k This is actually poster of Notre lady, the right aufgezeichnet was. PBSMT 3.2M This is actually a poster of Notre Dame, the right recorded. NMT 3.2M (baseline) This is actually a poster of emergency lady who was just recorded properly. NMT 100k (optimized) This is actually a poster of Notre Dame, that was really the first thing. NMT 3.2M (optimized) This is actually a poster from Notre Dame, which has been recorded right. Table 6: German→English translation examples with phrase-based SMT and NMT systems trained on 100k/3.2M words of parallel data.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2190–2196 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2190 Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization Manling Li1, Lingyu Zhang2, Heng Ji1, Richard J. Radke2 1 Department of Computer Science 2 Department of Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute 1{lim22,jih}@rpi.edu, 2{[email protected], [email protected]} Abstract Transcripts of natural, multi-person meetings differ significantly from documents like news articles, which can make Natural Language Generation models generate unfocused summaries. We develop an abstractive meeting summarizer from both videos and audios of meeting recordings. Specifically, we propose a multi-modal hierarchical attention mechanism across three levels: topic segment, utterance and word. To narrow down the focus into topically-relevant segments, we jointly model topic segmentation and summarization. In addition to traditional textual features, we introduce new multi-modal features derived from visual focus of attention, based on the assumption that an utterance is more important if its speaker receives more attention. Experiments show that our model significantly outperforms the state-of-the-art with both BLEU and ROUGE measures. 1 Introduction Automatic meeting summarization is valuable, especially if it takes advantage of multi-modal sensing of the meeting environment, such as microphones to capture speech and cameras to capture each participant’s head pose and eye gaze. Traditional extractive summarization methods based on selecting and reordering salient words tend to produce summaries that are not natural and incoherent. Although state-of-the-art work (Shang et al., 2018) employs WordNet (Miller, 1995) to make summaries more abstractive, the quality is still far from those produced by humans, as shown in Table 1. Moreover, these methods tend to have limited content coverage by selecting salient words. On the other hand, recent years have witnessed the success of Natural Language Generation (NLG) models to generate abstractive summaries. Since human-written summaries tend to mention the exact given keywords without paraphrasing, the copy mechanism proposed by a Pointer Generator Network (PGN) (See et al., 2017) naturally fits this task. Apart from generating words from a fixed vocabulary, it also copies the words from the input. However, transcripts of multi-person meetings widely differ from traditional documents. Instead of grammatical, wellsegmented sentences, the input is often composed of ill-formed utterances. Therefore, NLG models can easily lose focus. For example, in Table 1, PGN fails to capture the keywords remote control, trendy and user-friendly. Therefore, we propose a multi-modal hierarchical attention mechanism across topic segments, utterances, and words. We learn topic segmentation as an auxiliary task and limit the attention within each segment. Our approach mimics human summarization methods by segmenting first and then summarizing each segment. To locate key utterances, we propose that the rich multi-modal data from recording the meeting environment, especially cameras facing each participant, can provide speaker interaction and participant feedback to discover salient utterances. One typical interaction is Visual Focus Of Attention (VFOA), i.e., the target that each participant looks at in every timestamp. 
Possible VFOA targets include other participants, the table, etc. We estimate VFOA based on each participant’s head orientation and eye gaze. The longer the speaker is paid attention by others, the higher possibility that the utterance is important. For example, in Table 1, the high VFOA received by the speaker for the last two sentences assists in maintaining the bold keywords. 2 Method As shown in Figure 1, our meeting data consists of synchronized videos of each participant in a 2191 Um I'm Sarah, the Project Managerand this is our first meeting, surprisingly enough.   Okay, this is our agenda, um  we will do some stuff , get to know each other a bit better to feel more comfortable with each other .  Um then we'll go do tool training, talk about the project plan, discuss our own ideas and everything um and we've got twenty five minutes to do that,  as far as I can understand.   Now, we're developing a remote control which you probably already know. Um, we want it to be original,  something that's uh people haven't thought of, that's not out in the shops, um, trendy, appealing to a wide market, but you know, not a hunk of metal, and user­friendly, grannies to kids, maybe even pooches should be able to use it. Transcript Manual summary The project manager gave an introduction to the goal of the project , to create a trendy yet user­friendly remote. Extractive summary (Shang et al., 2018) Abstractive summary (See et al., 2017) Our Approach hunk of metal and user­friendly granny's to kids.  The project manager opened the meeting and introduced the upcoming project to the team members.  The project manager opens the meeting. The project manager states the goal of the project, which is to develop a remote control. It should be original, trendy, and user­friendly. UI ID ME PM UI ID ME PM UI ID ME PM Received   VFOA Table 1: Comparison of Human and System Generated Summaries. The color indicates the attention received by the speaker PM (Project Manager). Others: ME (Marketing Expert), ID (Industrial Designer), UI (User Interface). Figure 1: Multi-modal Meeting Summarization Framework group meeting, as well as a time-stamped transcript of the utterances generated by Automatic Speech Recognition (ASR) tools 1. We formulate a meeting transcript as a list of triples X = {(pi, fi, ui)}. pi ∈P is the the speaker of utterance ui, where P denotes the set of participants. fi contains the VFOA target sequence over the course of utterance ui for each participant. Each utterance ui is a sequence of words 1For example, IBM Watson’s Speech to Text System (https://www.ibm.com/watson/services/ speech-to-text/) ui = {wi 0, wi 1, . . . }. The output of our model is a summary Y and the segment ending boundaries S. The training instances for the generator are provided in the form of Ttrain = {(X, Y, S)}, and the testing instances only contain the transcripts Ttest = {X}. 2.1 Visual Focus of Attention Estimation Given the recording video of each individual, we estimate VFOA based on each participant’s head orientation and eye gaze for every frame. The VFOA targets include F = {p0, . . . , p|P|, table, whiteboard, projection screen and unknown}. 
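Before turning to VFOA estimation in detail, the following is a minimal plain-Python sketch of the data structures implied by the formulation above: the transcript X as a list of (p_i, f_i, u_i) triples, plus the summary Y and segment boundaries S. The class and field names are ours, introduced only for illustration, and are not taken from the authors' code.

```python
from dataclasses import dataclass
from typing import List

# Participant roles and VFOA targets follow the sets P and F described above.
PARTICIPANTS = ["PM", "ME", "ID", "UI"]
VFOA_TARGETS = PARTICIPANTS + ["table", "whiteboard", "projection_screen", "unknown"]

@dataclass
class Utterance:
    speaker: str            # p_i, one of PARTICIPANTS
    vfoa: List[List[str]]   # f_i: for each participant, one VFOA target per video frame
    words: List[str]        # u_i = {w_0, w_1, ...}

@dataclass
class TrainingInstance:
    transcript: List[Utterance]  # X = {(p_i, f_i, u_i)}
    summary: List[str]           # Y (absent at test time)
    boundaries: List[int]        # S: indices of segment-final utterances (absent at test time)
```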
As OpenFace Feature Extractor Output Eye Gaze Direction Head Pose   Angle VFOA Target Detector  Figure 2: VFOA Detector Framework illustrated in Figure 2, we feed each input color image into the OpenFace tool (Baltrusaitis et al., 2018) to estimate the head pose angle (roll, pitch and yaw) and the eye gaze direction vector (az2192 imuth and elevation), and concatenate them into a 5-dimensional feature vector. To obtain the actual visual targets from the head pose and eye gaze estimation, we build a seven-layer network to output a one-hot vector, which indicates the most possible visual target at the current frame, and each dimension stands for a VFOA target. The network is trained on the VFOA annotation, including the VFOA target for each frame of each participant. Then the output of all participants are concatenated. For utterance ui, the VFOA vector f i ∈ R|P|∗|F| is the sum of each frame’s VFOA outputs over the course of ui, where each dimension stands for the total duration of the attention paid to the corresponding VFOA target. 2.2 Meeting Transcript Encoder For an utterance ui = {wi 0, wi 1, . . . }, we embed each word wi j using the pretrained GloVe (Pennington et al., 2014), and apply a bidirectional gated recurrent unit (GRU) (Cho et al., 2014) to obtain the encoded word representation hi j. The utterance representations are the average of words. Additionally, the speaker pi is encoded into a onehot vector pi ∈R|P|. 2.3 Topic Segmentation Decoder We divide the input sequence into contiguous segments based on SegBot (Li et al., 2018). Its decoder takes a starting utterance of a segment as input at each decoding step, and outputs the ending utterance of the segment. Taking Figure 3 as an example, there are 5 utterances in the transcript. The initial starting utterance is u0 with the possible positions from u0 to u4; if u2 is detected as the ending utterance, then u3 is the next starting utterance and is input to the decoder, with possible positions from u3 to u4. We extend SegBot to obtain the distribution over possible positions j ∈{i, i+1, . . . } by using a multi-modal segmentation attention: αseg ij =v⊤ s tanh(Wudi+W hhj+W ppj+W ff j) where di is the decoded utterance of starting utterance ui. Let si denote the ending utterance of the segment that starts with the utterance ui, the probability for uj to be the ending utterance si is: P(si = uj|(pi, f i, ui)) = exp αseg ij P k∈{i,i+1,...} exp αseg ik , Figure 3: Topic Segmentation Decoder 2.4 Meeting Summarization Decoder We build our decoder based on Pointer-Generator Network (PGN) (See et al., 2017) to copy words from the input transcript in terms of attention distribution. Different from PGN, we introduce a hierarchical attention mechanism based on the topic segmentation results, as shown in Figure 4. Figure 4: Hierarchical Attention in Summary Decoder As VFOA has close ties to salient utterances, we use the VFOA received by speaker f ⊤ k p ′ k to capture the importance of utterance uk, where p ′ k is the a vector indicating which dimension’s VFOA target is the speaker pk. Formally, we use a GRU to obtain the decoded hidden states di for the ith input word. The Utterance2Word attention on the word wj of the utterance uk is: eij =v⊤ 1 tanh(W d1di+W wwj+W ppj+W ff j) The context representation for the utterance uk is uik = Softmax(eij)wj, wj ∈uk. The Segment2Utterance attention on the utterance uk in the input transcript is: e ′ ik =f ⊤ k p ′ k  v⊤ 2 tanh (W d2di + W uuik)  . 
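As a concrete illustration of the segmentation attention in Section 2.3, here is a minimal PyTorch sketch of the scoring function α^seg_ij and the resulting boundary distribution. Tensor shapes, the module name, and the masking of candidate positions j ≥ i are our assumptions for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SegmentBoundaryAttention(nn.Module):
    """Multi-modal segmentation attention (a sketch of Section 2.3)."""
    def __init__(self, dec_dim, enc_dim, spk_dim, vfoa_dim, att_dim):
        super().__init__()
        self.W_u = nn.Linear(dec_dim, att_dim, bias=False)
        self.W_h = nn.Linear(enc_dim, att_dim, bias=False)
        self.W_p = nn.Linear(spk_dim, att_dim, bias=False)
        self.W_f = nn.Linear(vfoa_dim, att_dim, bias=False)
        self.v_s = nn.Linear(att_dim, 1, bias=False)

    def forward(self, d_i, h, p, f, candidate_mask):
        # d_i: (batch, dec_dim) decoder state for the segment's starting utterance u_i
        # h, p, f: (batch, n_utt, *) utterance, speaker, and VFOA representations
        # candidate_mask: (batch, n_utt) boolean, True for candidate positions j >= i
        scores = self.v_s(torch.tanh(
            self.W_u(d_i).unsqueeze(1) + self.W_h(h) + self.W_p(p) + self.W_f(f)
        )).squeeze(-1)                                    # alpha^seg_ij
        scores = scores.masked_fill(~candidate_mask, float("-inf"))
        return torch.softmax(scores, dim=-1)              # P(s_i = u_j | (p_i, f_i, u_i))
```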
2193 Model ROUGE BLEU ROUGE 1 ROUGE 2 ROUGE L BLEU 1 BLEU 2 BLEU 3 BLEU 4 CoreRank (Shang et al., 2018) 37.86 7.84 13.72 17.17 6.78 1.77 0.00 PGN (See et al., 2017) 36.75 10.48 23.81 37.89 23.41 12.84 6.92 Our Approach (TopicSeg+VFOA) 53.29 13.51 26.90 40.98 26.19 13.76 8.03 Our Approach (TopicSeg) 51.53 12.23 25.47 39.67 24.91 12.37 7.86 Table 2: Comparison on AMI datasets The context representation for segment sq is ciq = Softmax(e ′ ik)uk, uk ∈sq. The Meeting2Segment attention is: e ′′ iq =v⊤ 3 tanh (W d3di + W sciq). The hierarchical attention of wj is calculated within the utterance uk and then segment sq: αsum ij = exp  eije ′ ike ′′ iq  P j∈sq exp  eije ′ ike ′′ iq , The probability of generating yi follows the decoder in PGN (See et al., 2017), and αsum ij is the attention in the decoder for copying words from the input sequence. 2.5 Joint End-to-End Training The summarization task and the topic segmentation task are trained jointly with the loss function: L = −log P(Y, S|X) = X yi∈Y −log P(yi|X)+ X sj∈S −log P(sj|(pj,f j,uj)) where P(Y, S|X) is the conditional probability of the summary Y and the segments S given the input meeting transcript X = {(pi, fi, ui)}. Here, yi is one token in the ground truth summary, and sj denotes the ending boundary of the segment that starts with uj. 3 Experiments Our experiments are conducted on the widely used AMI Meeting Corpus (Carletta et al., 2005). This corpus is about a remote control design project from kick-off to completion. Each meeting lasts 30 minutes and contains four participants: a project manager, a marketing expert, an industrial designer, and a user interface designer. We follow the conventional approach (Shang et al., 2018) in the meeting analysis literature to preprocess and divide the dataset into training (97 meetings), development (20 meetings) and test sets (20 meetings). One meeting in the test set does not provide videos and thus it is ignored. The ASR transcripts are provided in the dataset (Garner et al., 2009), which are manually revised based on the automatically generated ASR output. Each meeting has a summary containing about 300 words and 10 sentences. Each meeting is also divided into multiple segments focusing on various topics. The ASR transcripts and the videos recorded for all participants are the input of the model. We use manual annotation of summaries and topic segments for training, while they are generated automatically during testing. The VFOA estimation model is trained separately on the VFOA annotation of 14 meetings in the dataset, and achieve 64.5% prediction accuracy. The baselines include: (1) state-of-the-art extractive summarization method CoreRank (Shang et al., 2018), and (2) neural network based generation model PGN (See et al., 2017). We adopt two standard metrics ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) for evaluation. Additionally, to show the impact of VFOA, we remove the VFOA features as an additional baseline, and conduct significance testing. By T-test, the differences on ROUGE and BLEU are considered to be statistically significant (P value ≤0.09), except BLEU 4 (P value = 0.27). Compared to the abstractive method PGN in Table 2, the multimodal summarizer achieves larger improvement on ROUGE than BLEU. It demonstrates our approach’s ability to focus on topically related words. 
For example, ‘The marketing expert discussed his findings from trend watching reports, stressing the need for a product that has a fancy look and feel, is technologically innovative...’ is generated by our model, while the PGN generates ‘the marketing expert discussed his findings from trend watching reports’. The speaker receives higher VFOA from participants while mentioning the utterances containing these keywords. To demonstrate the effectiveness of VFOA attention, we rank the utterances in terms of VFOA, and achieve 45.8% accuracy of selecting salient utterances based on the annotation of 2194 (Shang et al., 2018)2. Therefore, the model learns that when the speaker receives higher VFOA, the utterances of that speaker is more important. Moreover, topic segmentation also contributes to the better coverage of salient words, which is demonstrated by the improvement on ROUGE metrics of the model without VFOA features. Each meeting is divided to six to ten segments, with special focuses on topics such as ‘openings’, ‘trend watching’, ‘project budget’ and ‘user target group’. With the topic segmentation results, the utterances within the same segment are more correlated, and topically related words tend to be frequently mentioned. For example, ‘fancy look’ is more important within the ‘trend watching’ segment than the whole transcript. The VFOA distribution is highly correlated to topic segmentation. For example, the project manager pays more attention to the user interface designer in ‘trend watching’ segment, while focuses more on the marketing expert in another segment about ‘project budget’. Therefore, the VFOA feature not only benefits the summarization decoder, but also improves the performance of topic segmentation. The topic segmentation accuracy is 57.74% without VFOA feature, and 60.11% with VFOA feature in segmentation attention. Compared to the extractive method CoreRank in Table 2, our BLEU scores are doubled, which demonstrate that the abstractive summaries are more coherent and natural. For example, the extractive summaries are often incomplete sentences, such as ‘prefer a design where the remote control and the docking station’. But the abstractive summaries are well-organized sentences, such as ‘The remote will use a conventional battery and a docking station which recharges the battery’. Also, the improvement on ROUGE 2 and ROUGE L is larger than ROUGE 1, which shows the superiority of abstractive methods to maintain longer terms, such as corporate website, etc. 4 Related Work Extractive summarization methods rank and select words by constructing word co-occurrence graphs (Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Lin and Bilmes, 2010; Tixier et al., 2016b), and they are applied to meeting summarization (Liu et al., 2009, 2011; Tixier et al., 2https://bitbucket.org/dascim/acl2018_ abssumm/src/master/data/meeting/ami 2016a; Shang et al., 2018). However, extractive summaries are often not natural and coherent with limited content coverage. Recently the neural natural language generation models boost the performance of abstractive summarization (Luong et al., 2015; Rush et al., 2015; See et al., 2017), but they are often unable to focus on topic words. Inspired by utterance clustering in extractive methods (Shang et al., 2018), we propose a hierarchical attention based on topic segmentation (Li et al., 2018). Moreover, our hierarchical attention is multi-modal to narrow down the focus by capturing participant interactions. 
Multi-modal features from human annotations have been proven effective at improving summarization, such as dialogue act (Goo and Chen, 2018). Instead of using human annotations, our approach utilizes a simply detectable multi-modal feature VFOA. 5 Conclusions and Future Work We develop a multi-modal summarizer to generate natural language summaries for multi-person meetings. We present a multi-modal hierarchical attention mechanism based on VFOA estimation and topic segmentation, and the experiments demonstrate its effectiveness. In the future, we plan to further integrate higher level participant interactions, such as gestures, face expressions, etc. We also plan to construct a larger multimedia meeting summarization corpus to cover more diverse scenarios, building on our previous work (Bhattacharya et al., 2019). Acknowledgments This material is based upon work supported by the U.S. National Science Foundation under Grant No. IIP-1631674, DARPA AIDA Program No. FA8750-18-2-0014, ARL NS-CTA No. W911NF-09-2-0053, and Tencent AI Lab Rhino-Bird Gift Fund. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 2195 References Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. 2018. Openface 2.0: Facial behavior analysis toolkit. 2018 13th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2018), pages 59–66. Indrani Bhattacharya, Michael Foley, Christine Ku, Ni Zhang, Tongtao Zhang, Cameron Mine, Manling Li, Heng Ji, Christoph Riedl, Brooke Foucault Welles, and Richard J. Radke. 2019. The unobtrusive group interaction (UGI) corpus. In 10th ACM Multimedia Systems Conference (MMSys 2019). Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The AMI meeting corpus: A pre-announcement. In International Workshop on Machine Learning for Multimodal Interaction, pages 28–39. Springer. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. Philip N Garner, John Dines, Thomas Hain, Asmaa El Hannani, Martin Karafiat, Danil Korchagin, Mike Lincoln, Vincent Wan, and Le Zhang. 2009. Realtime ASR from meetings. In Tenth Annual Conference of the International Speech Communication Association. Chih-Wen Goo and Yun-Nung Chen. 2018. Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts. arXiv preprint arXiv:1809.05715. Jing Li, Aixin Sun, and Shafiq Joty. 2018. Segbot: A generic neural text segmentation model with pointer network. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI). Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Hui Lin and Jeff Bilmes. 2010. 
Multi-document summarization via budgeted maximization of submodular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 912–920. Fei Liu, Feifan Liu, and Yang Liu. 2011. A supervised framework for keyword extraction from meeting transcripts. IEEE Transactions on Audio, Speech, and Language Processing, 19(3):538–548. Feifan Liu, Deana Pennell, Fei Liu, and Yang Liu. 2009. Unsupervised approaches for automatic keyword extraction using meeting transcripts. In Proceedings of human language technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 620–628. Association for Computational Linguistics. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. George A Miller. 1995. Wordnet: a lexical database for English. Communications of the ACM, 38(11):39– 41. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Jean-Pierre Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorr´e. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1. 2196 Antoine Tixier, Fragkiskos Malliaros, and Michalis Vazirgiannis. 2016a. A graph degeneracy-based approach to keyword extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1860–1870. Antoine Tixier, Konstantinos Skianis, and Michalis Vazirgiannis. 2016b. Gowvis: a web application for graph-of-words-based text visualization and summarization. Proceedings of ACL-2016 System Demonstrations, pages 151–156.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2197–2203 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2197 Adversarial Domain Adaptation Using Artificial Titles for Abstractive Title Generation Francine Chen FX Palo Alto Laboratory Palo Alto, CA [email protected] Yan-Ying Chen FX Palo Alto Laboratory Palo Alto, CA [email protected] Abstract A common issue in training a deep learning, abstractive summarization model is lack of a large set of training summaries. This paper examines techniques for adapting from a labeled source domain to an unlabeled target domain in the context of an encoder-decoder model for text generation. In addition to adversarial domain adaptation (ADA), we introduce the use of artificial titles and sequential training to capture the grammatical style of the unlabeled target domain. Evaluation on adapting to/from news articles and Stack Exchange posts indicates that the use of these techniques can boost performance for both unsupervised adaptation as well as fine-tuning with limited target data. 1 Introduction Many types of textual content, such as conversations and posts on chat, do not have a title or summary. While multi-sentence extractive summarization can give a sense of the content of an article, a title or highlight is more concise. Such short summaries can be generated using abstractive summarization with an RNN encoder-decoder model, e.g., (Nallapati et al., 2016). A common issue when training models for abstractive summarization of conversations and posts is the lack of a large set of text with summaries. Obtaining good quality labeled data can be difficult and expensive, especially if authorgenerated summaries are desired. One option is to train on data from another domain with authorgenerated titles, but because of differences between domains, the performance may be less than adequate. These differences include different vocabularies, different grammatical styles, and different ways of expressing similar concepts. Vocabulary expansion may be used to address the different vocabularies in source and target domains, and adversarial domain adaptation (ADA) may be used to merge the embedded feature representations across domains. However, ADA does not adapt the decoder in an encoder-decoder generation model. In this paper, we investigate the utility of these techniques in unsupervised domain adaptation for title generation. We also examine the use of a limited amount of labeled training data from the target domain, when high performance may be required but training data is not easily available. Our contributions include (1) proposing the use of artificial titles for unlabeled target documents to train a decoder to learn the grammatical style of titles in the new domain (2) proposing to train the decoder in a sequence of steps that encourages the source and target embedding spaces to remain aligned during adaptation, and (3) showing that our model improves performance over ADA and an expanded vocabulary alone and further, that a limited amount of labeled target data can achieve performance close to training on all labeled target data. 2 Related Work Our model draws from work on abstractive summarization and unsupervised domain adaptation. 
Recently, a number of neural encoder-decoder models have been proposed for abstractive summarization e.g., (Rush et al., 2015; Chen et al., 2016a; Nallapati et al., 2016; Chopra et al., 2016; Li et al., 2017; Narayan et al., 2018; Hsu et al., 2018), with one of the better performing models being (See et al., 2017), which serves as our base model. Supervised domain adaptation methods have been proposed for generative models. (Hua and Wang, 2017) found that pre-training an abstractive summarizer with extractive summaries does not always improve performance, but (Chen et al., 2015) noted that fine-tuning a model trained 2198 Figure 1: Encoder-decoder RNN model for text generation with a classifier for adversarial domain adaptation of the encoded representations (concepts) to an unlabeled target domain. Gradient reversal of Ld from the domain classifier to the encoder is indicated. The blue/red articles represent source/target domain data. on source domain data with limited target domain data does improve performance. A variety of techniques have been proposed for unsupervised domain adaptation of deep learning systems for classification, e.g., (Hsu et al., 2017; Tzeng et al., 2017; Ganin et al., 2016; Chen et al., 2016b; Ghifary et al., 2016). However, all used the aligned encoder representation for classification but not generation. We adapt the domain-adversarial method for feature alignment in an encoder proposed by (Ganin et al., 2016). However, for text generation, a domain-independent representation from the encoder, as used in domain adaptation for classification, is not adequate. We also require the decoder to be adapted to varying domains to generate output appropriate for the target domain, an issue that we investigate in the context of title generation. Jointly training a translation model with mixed labeled data from two domains can improve performance over training on one domain only (Pryzant et al., 2017). In contrast, our domain adaptation method trains sequentially on data, first with the unlabeled target domain data. 3 Domain-Adapted Title Generation Our goal is to improve performance when labeled data from one domain, the source, is used to train a model which is then applied to another domain with no or only limited labeled data, the target. 3.1 Adversarial Domain Adaptation (ADA) The embedded representation generated by the encoder, which represents the “concepts” in the input text, may differ across domains. To address this, we adapt the method proposed by (Ganin et al., 2016), which uses a domain classifier to force the concept representations to align across domains. We use an encoder-decoder RNN model with domain adaptation (Figure 1) for title generation. Labeled source data is fed to the encoder and the decoder learns to generate summary titles. At the same time, the source data and unlabeled target domain data are encoded by a bidirectional LSTM as their concept representations, and the domain classifier tries to learn to differentiate between the representations of two domains. The domain classifier has two dense, 100-unit hidden layers followed by a softmax. The concept representation vector is computed as the bidirectional LSTM encoder’s final forward and backward hidden states concatenated into a single state. 
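To make the adversarial setup concrete, the following PyTorch sketch shows a gradient reversal layer and a domain classifier with two 100-unit hidden layers as described above, together with the λ ramp used during training (cf. Eq. (4) below). The hidden-layer activation (ReLU) and all module names are our assumptions for illustration, not the authors' code.

```python
import math
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the encoder is trained to maximize the domain loss."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainClassifier(nn.Module):
    """Two dense 100-unit hidden layers over the concatenated final encoder
    states; returns domain logits (softmax/cross-entropy applied in the loss)."""
    def __init__(self, concept_dim, n_domains=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(concept_dim, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, n_domains),
        )

    def forward(self, concept, lam):
        return self.net(GradReverse.apply(concept, lam))

def lambda_schedule(step, ramp_steps=5000):
    """Adversarial weight ramp 2 / (1 + exp(-10p)) - 1 with p in [0, 1];
    approaches 1.0 by the end of the ramp and is then held there."""
    p = min(step / ramp_steps, 1.0)
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0
```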
During training, the gradient from the domain classifier, ∂Ld ∂θd , is “reversed” to be negative before being propagated back through the encoder as −∂Ld ∂θc , encouraging the embedded representations to align by adjusting the feature distributions to maximize the loss of the domain classifier. In contrast to the two classification losses used by (Ganin et al., 2016) for training the model, we use the generated sequence loss together with the adversarial domain classifier loss: loss = 1 T T X t=0 Ly(t) −λLd (1) where, following (See et al., 2017), the decoder (sequence) loss Ly(t) = −logP(w∗ t ) (2) is the negative log likelihood of the target word w∗ t at position t. The domain classifier loss, Ld, is the cross-entropy loss between the predicted and true domain label probabilities, Ld = d · logP( ˆd) + (1 −d) · log(1 −P( ˆd)). (3) λ is a parameter relating the two losses. We followed the schedule from (Ganin et al., 2016) for adjusting λ for the encoder: λp = 2 1 + exp(−10p) −1 (4) λ was increased from 0.0 to 1.0 by increasing p from 0.0 to 1.0 over 5000 iterations, at which point we observed that the domain adaptation classifier loss was reaching an asymptote. λ was then held equal to 1.0 and training continued until validation performance for title generation reached an asymptote (when training on artificial titles or source data) or overtraining occurred (when training on limited target data). When updating the domain classifier, λ was set equal to one. 2199 Figure 2: Flowchart for training a model for an unlabeled target domain with artificial targets. 3.2 Artificial Titles The style of the unlabeled target may be different from the source, e.g., Stack Exchange is more casual and includes more slang than news articles. To capture the style of the unlabeled target, “artificial” titles were synthesized. Since titles tend to be short and encode-decoder models learn to model sentence length, target text between 4-10 words in length were selected. A common summary baseline is the first few sentences of a news article e.g. (Zajic et al., 2004; Nallapati et al., 2016); some social media sites, including Trip Advisor, Facebook and Reddit, display the first words of long posts. For example, this paragraph might be shown as ”The style of the unlabeled target may ...”. The first text meeting the length requirement was selected 90% of the time and the second text meeting the requirement selected otherwise. For Stack Exchange, the text was a sentence from a post, and for news, where titles are often phrases, the text was a clause. Training on first text only, the loss dropped below 0.001 in less than 3k iterations, indicating the model had learned to copy from the first sentence. Use of the second text discourages this so that both the encoder and decoder are trained on text from the target domain (enabling use of an expanded, joint vocabulary trained on both source and target) to learn its style and vocabulary. However, the artificial titles will generally be different from the real titles, which may lead to lower summarization performance. 3.3 Sequential Training Our adaptation method, ASADA, is shown in Figure 2: a) A model with a joint vocabulary is first pre-trained on artificial titles for the unlabeled target domain (Section 3.2). b) The embedding space of the pre-trained model is then adapted to the source domain using ADA (Section 3.1) to continue training on the target domain with the source domain as the auxiliary adaptation data. 
c) With a joint embedding space defined, the model is trained on the source domain, which has title-text pairs, and the unlabeled target domain is used as the auxiliary adaptation data to keep the model dataset type use # train summary length samples mean std dev StackEx artif. Tart 398k 11.3 5.4 filt-10 S,F 140k 6.5 1.4 News artif. Tart 287k 7.7 1.5 filt-10 F 31k 9.0 1.4 filt-14 S 168k 11.9 1.8 Table 1: Statistics of the Stack Exchange and News datasets. Tart: artificial Target; S: Source; F: finetuning; filt-X: filtered for at most length X. embedding aligned with the target data. 4 Dataset We used data from two domains: the public CNN/Dailymail (News) dataset used by (See et al., 2017) and posts from 20 Stack Exchange (StackEx) channels1 with a bias towards those that are business related (see Appendix A for details). To reduce training time, each article was truncated to 200 words. We limited the data to those with title lengths of 10 words or less for use in finetuning because some were longer sentences rather than titles. (See Table 1) The News datasets were formatted as in (See et al., 2017). The StackEx dataset was randomly divided into train (90%), validation (5%) and test (5%). 5 Experiments For all experiments, the Pointer-Generator model (Gulcehre et al., 2016) by (See et al., 2017) was used without coverage as our base model, since coverage is an additional training step that would add an additional variable to the comparisons. Although coverage improves performance by reducing repetitive words, we chose to examine the effects of different domain adaptation methods without it. For handling differences in vocabulary, the vocabulary of the labeled source and unlabeled target domains were combined. The union of the 50k most frequent terms from the training data of each domain produced a joint vocabulary of about 85k terms. When an individual vocabulary was used, the size was 50k words. When sequential training was used, a model was trained until the loss on a validation set reached an asymptote. Domain adaptation experiments from News to StackEx and from StackEx to News were conducted, first without target domain summary titles and then with a limited amount of target domain titles. 1https://archive.org/details/ stackexchange downloaded 05/26/2017 2200 id reference or description vocab training data and method News →StackEx StackEx →News ROUGE ROUGE 1 2 L 1 2 L (a) See et al. S S 14.22 4.22 12.80 12.92 3.19 12.15 (b) joint vocab S+T S 15.99 4.87 14.42 10.85 2.85 10.23 (c) Ganin et al. (ADA) S+T S, SADA 16.75 5.24 15.10 12.45 3.12 11.53 (d) artif titles S+T Tart 14.28 4.87 13.26 12.02 3.58 11.06 (e) artif titles, ADA S+T Tart, SADA 16.88 5.35 15.24 14.36 3.84 13.47 (f) ASADA S+T Tart, TADA art , SADA 17.78 6.22 16.15 16.75 6.11 15.99 (g) ASADA (lead-1) S+T Tlead1, TADA lead1, SADA 16.46 5.30 15.01 16.16 3.36 14.64 (h) Pryzant et al.(DM) S+T S+Tart 14.63 5.00 13.49 15.13 5.32 14.51 (i) Pryzant et al. (ADM) S+T S+Tart 15.29 5.37 14.06 13.00 4.30 12.01 (j) upper bound T T 31.49 13.70 29.22 23.52 10.92 22.34 Table 2: Title generation performance of domain adaptation from Source S to Target T. (a-c) Baselines. (d-g) Our approaches with artificial titles Tart and with lead-1 Tlead1, respectively. (h) DM: Discriminative Mixing. (i) ADM: Adversarial Discriminative Mixing. (j) Upper bound trained on labeled target data. Training steps are separated by commas. SADA: train on S using ADA. TADA art : train on Tart using ADA. 
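Before turning to the results, here is a minimal sketch of the three-stage ASADA schedule from Section 3.3: pre-train on artificial target titles, keep training on them while ADA aligns the embedding space with the source, then train on the labeled source with ADA against the unlabeled target. The helpers train_summarizer and train_with_ada are hypothetical placeholders for ordinary seq2seq training and adversarially regularized training, respectively.

```python
# Sketch of the three-stage ASADA schedule (Tart -> Tart + ADA(S) -> S + ADA(T)).
# `train_summarizer` and `train_with_ada` are hypothetical helpers standing in
# for ordinary seq2seq training and ADA training with the domain classifier.
def train_asada(model, source_labeled, target_unlabeled, target_artificial):
    # (a) Pre-train on target documents paired with artificial titles,
    #     using the joint source+target vocabulary.
    train_summarizer(model, labeled=target_artificial)

    # (b) Continue training on the artificial-title target data while ADA
    #     aligns the encoder embedding space with the source domain.
    train_with_ada(model, labeled=target_artificial, adaptation_data=source_labeled)

    # (c) Train on labeled source title-text pairs to learn real title
    #     generation, with ADA against the unlabeled target keeping the
    #     embedding spaces aligned.
    train_with_ada(model, labeled=source_labeled, adaptation_data=target_unlabeled)
    return model
```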
prev curr domains same training training gradually labeled data & data & or jointly data id method method embedded? domain? (E) Tart SADA no no (F1) Tart TADA art yes yes (F2) TADA art SADA yes no Table 3: Comparison of adaptation steps with artificial titles using one step, (E), and two step ASADA, (F1) and (F2). (E) and (F) correspond to the models (e) and (f) in Table 2, respectively. 5.1 Unsupervised Target Domain Adaptation For our investigations on domain adaptation when labeled target domain data is unavailable, models trained on source domain labels only and with a mix of source domain labels and artificial target labels are our baselines. Effect of ADA and Vocabulary The top section of Table 2 shows baseline models trained (a) with the source domain vocabulary [(See et al., 2017)’s approach without coverage] (b) with a joint vocabulary instead of the source domain vocabulary (c) model (b) followed by training using ADA to the target domain [(Ganin et al., 2016)’s approach]. The mixed results using a joint vocabulary reflect the better coverage of the added target words outside the source’s top-50k vocabulary when the source is News vs. StackEx (see Appendix B). And when a joint vocabulary (S+T) is used, ADA (c) improves performance over training only on the source S (b), as expected. Effect of Artificial Titles and Sequential Training The second section of Table 2 compares approaches using artificial titles: (d) Tart: a model pre-trained on target domain articles/posts with artificial target domain titles (e) Tart, SADA: model (d), further trained on the source with ADA to the target without labels. (f) Tart,TADA art ,SADA: ASADA. Model (d), followed by adapting the model, which has been trained on the target domain with non-optimal summaries, to source data, aligning the embedded representations of the two domains. Then the model is trained on source data with ADA to the unlabeled target to learn how to summarize while keeping the embedded representations aligned. (g) ASADA using the lead-1 (first) sentence in place of Tart. The better performance in (f) supports ASADA’a use of artificial titles. ASADA’s two-step adaptation with artificial titles performed best out of all models. The mixed performance of training on Tart indicates the artificial title quality is lower for StackEx, (d) vs. (b). The weakly better performance of (e) over (c) indicates that applying SADA directly forgets much of Tart. The relative improvement of ASADA over training only on source was 25% (from News to StackEx) and 30% (from StackEx to News). This indicates that TADA art allows the model to remember the vocabulary and style from Tart while learning how to summarize by SADA. Table 3 illustrates differences between the onestep adaptation model (e), with id (E) and the twostep adaptation used in ASADA (F1 and F2). In both, the model is first trained on the target domain using Tart. In model (e), ADA then trains the encoder on source only and ignores Tart, gradually giving greater weight to the domain classifier, which uses the target data (see Sec. 3.1). At 2201 Figure 3: Domain adaptation performance with varying amounts of labeled StackEx (left) and News (right) data for fine-tuning with ADA (* DA) and without (* FT). For reference, performance when trained on all labeled target data and no adaptation (* 100%). the same time, the labeled data domain is switched to the source domain, so that both the embedding and decoder domains are abruptly changed. 
In contrast, in ASADA the embedding is gradually adapted from the target domain to jointly embed the source and target (F1). Only then is the target domain changed (F2). In the third section, the labeled source is mixed with target domain artificial titles and trained using (Pryzant et al., 2017)’s Discriminative Mixed (DM) and Adversarial Discriminative Mixed (ADM) machine translation models. ADM is similar to ADA in that both use and adversarial classifier; however, for ADM both domains have labeled data. ASADA’s better performance indicates that first pre-training with artificial titles to learn vocabulary and style and then adapting to the source to learn to summarize is better than jointly mixing artificial and true titles. 5.2 Limited Target Domain Labels We next examine adaptation performance when a limited amount of labeled data is available for the target domain. Our best model for each domain, ASADA, is refined by training on various percentages of the labeled target domain training data and referred to as ‘* DA’ in Figure 3. For comparison, a baseline model was trained using labeled source domain data and then fine-tuned (Sun et al., 2016; Song et al., 2017) using labeled target domain data and is shown as ‘* FT’. Note that (1) when labeled target domain data is very limited, say 3,000 labeled samples, ‘* DA’ improves performance more than ‘* FT’ (2) as the amount of labeled target data increases, the performance with and without ADA increases, and with 30% of the target data (rightmost points) is close to or exceeds using 100% of the target data. Figure 4: MDS visualizations comparing embeddings of a sample of test text produced by models (d), (e) and (f) in Table 2. artif: model (d). artif,srcADAmid: model (e) midway through ADA. artif,srcADA: trained model (e). ASADA: model (f). Left: News →StackEx. Right: StackEx →News. 5.3 Visualization of Adaptation Models Embedded points produced by models (d), (e) and (f) (see Section 5.1) are compared in the visualization in Figure 4. For the one-step adaptation model, (e), embedded points are shown partway through adaptation with ADA (i.e., p in Eqn. (4) is approximately 0.5) and after adaptation. The embedding partway through adaptation, labeled artif,srcADAmid, has moved away from the Tart embedding (model (d), labeled artif). After adaptation, labeled artif,srcADA, the embedded points are only slightly closer to the Tart embedded points. In contrast, the ASADA (f) embedding is closer to the Tart embedding and more compact, as is Tart. This supports our hypothesis that ASADA retains more of what was learned from the initial target embedding than model (e)’s onestep adaptation, contributing to ASADA’s better performance. 6 Summary We investigated unsupervised domain adaptation methods for an encoder-decoder model. We proposed the use of artificial titles for training a decoder to the target domain vocabulary and style and sequential adversarial domain adaptation to minimize rapid changes of the encoder embedding space. Our experiments show that our proposed approach performed best when compared to baseline adaptation techniques when unsupervised. And with very limited target domain labels for fine-tuning, our model performed better than fine-tuning a model trained on the source domain. In the future, we would like to understand the usefulness of artificial titles for training the decoder relative to other factors that may impact performance, e.g., how similar the true titles or summaries are in the different domains. 
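For readers who want to reproduce a Figure-4-style comparison, the sketch below projects encoder concept vectors from several models into 2D with MDS. It assumes each model's vectors have already been extracted for the same sample of test documents; the function and its arguments are illustrative, not the authors' plotting code.

```python
import numpy as np
from sklearn.manifold import MDS
import matplotlib.pyplot as plt

def plot_embedding_comparison(embeddings_by_model):
    """embeddings_by_model maps a model label (e.g. 'artif', 'ASADA') to an
    (n_samples, dim) array of encoder concept vectors for the same test sample."""
    labels = list(embeddings_by_model)
    stacked = np.concatenate([embeddings_by_model[k] for k in labels], axis=0)
    points = MDS(n_components=2, random_state=0).fit_transform(stacked)
    start = 0
    for k in labels:
        n = len(embeddings_by_model[k])
        plt.scatter(points[start:start + n, 0], points[start:start + n, 1],
                    s=8, label=k)
        start += n
    plt.legend()
    plt.show()
```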
2202 References Qian Chen, Xiao-Dan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016a. Distraction-based neural networks for modeling document. In IJCAI, pages 2754–2760. Xie Chen, Tian Tan, Xunying Liu, Pierre Lanchantin, Moquan Wan, Mark JF Gales, and Philip C Woodland. 2015. Recurrent neural network language model adaptation for multi-genre broadcast speech recognition. In Sixteenth Annual Conference of the International Speech Communication Association. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016b. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv preprint arXiv:1606.01614. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030. Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li. 2016. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pages 597–613. Springer. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140–149. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 19th International Conference on Computational Linguistics (Long Papers), pages 132–141. Association for Computational Linguistics. Wei-Ning Hsu, Yu Zhang, and James Glass. 2017. Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation. In IEEE Automatic Speech Recognition and Understanding Workshop. Xinyu Hua and Lu Wang. 2017. A pilot study of domain adaptation effect for neural abstractive summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 100–106. Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2091–2100. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Reid Pryzant, Denny Britz, and Quoc Le. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 118–126. Alexander M Rush, SEAS Harvard, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Xinhang Song, Luis Herranz, and Shuqiang Jiang. 2017. Depth cnns for rgb-d scene recognition: Learning from scratch better than transferring from rgb-cnns. In AAAI, pages 4271–4277. Baochen Sun, Jiashi Feng, and Kate Saenko. 2016. Return of frustratingly easy domain adaptation. In AAAI, pages 2058–2065. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Computer Vision and Pattern Recognition (CVPR), page 4. David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. Bbn/umd at duc-2004: Topiary. In Proceedings of the HLT-NAACL 2004 Document Understanding Workshop, Boston, pages 112–119. A Stack Exchange Dataset The Stack Exchange channels used for the dataset are: ai (i.e., ai.stackexchange.com), android, arduino, cs, datascience, emacs, engineering, freelancing, iot, opendata, opensource, patents, programmers, robotics, salesforce, sharepoint, travel, unix, webapps, and workplace. 2203 Figure 5: Histograms of News and Stack Exchange vocabularies showing the number of target domain joint vocabulary word tokens that are unrepresented in the source training data. B Cross-Domain Vocabulary Coverage For the expanded, joint vocabulary of source and target, Figure 5 shows that the number of News target tokens not represented by StackExchange vocabulary terms is much larger than the number of Stack Exchange target tokens not represented by News vocabulary terms. When trained on source only, these unrepresented target domain tokens are neither trained nor handled by the pointergenerator mechanism. Adversarial Domain Adaptation enables training of the encoder on these target tokens. Artificial Titles enable the decoder to be trained on these tokens.
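A small sketch of the vocabulary-coverage computation behind Figure 5, assuming whitespace-tokenized training corpora and the 50k-per-domain vocabularies described in Section 5. Whether "unrepresented" counts tokens absent from the source's top-50k vocabulary or from the source data entirely is not fully specified above; the sketch uses the former, and the function name is ours.

```python
from collections import Counter

def unrepresented_target_tokens(source_docs, target_docs, vocab_size=50_000):
    """Count target-domain token occurrences whose word type is in the target's
    top-`vocab_size` vocabulary but not in the source's top-`vocab_size`
    vocabulary. Tokenization is plain whitespace splitting, a simplification."""
    src_counts = Counter(tok for doc in source_docs for tok in doc.split())
    tgt_counts = Counter(tok for doc in target_docs for tok in doc.split())
    src_vocab = {w for w, _ in src_counts.most_common(vocab_size)}
    tgt_vocab = {w for w, _ in tgt_counts.most_common(vocab_size)}
    return sum(count for tok, count in tgt_counts.items()
               if tok in tgt_vocab and tok not in src_vocab)
```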
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2204 BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization Eva Sharma1, Chen Li2, and Lu Wang1 1Khoury College of Computer Sciences, Northeastern University 2Tencent AI Lab [email protected], [email protected] [email protected] Abstract Most existing text summarization datasets are compiled from the news domain, where summaries have a flattened discourse structure. In such datasets, summary-worthy content often appears in the beginning of input articles. Moreover, large segments from input articles are present verbatim in their respective summaries. These issues impede the learning and evaluation of systems that can understand an article’s global content structure as well as produce abstractive summaries with high compression ratio. In this work, we present a novel dataset, BIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human written abstractive summaries. Compared to existing summarization datasets, BIGPATENT has the following properties: i) summaries contain a richer discourse structure with more recurring entities, ii) salient content is evenly distributed in the input, and iii) lesser and shorter extractive fragments are present in the summaries. Finally, we train and evaluate baselines and popular learning models on BIGPATENT to shed light on new challenges and motivate future directions for summarization research. 1 Introduction There has been a growing interest in building neural abstractive summarization systems (See et al., 2017; Paulus et al., 2017; Gehrmann et al., 2018a), which requires large-scale datasets with high quality summaries. A number of summarization datasets have been explored so far (Sandhaus, 2008; Napoles et al., 2012; Hermann et al., 2015; Grusky et al., 2018). However, as most of them are acquired from news articles, they share specific characteristics that limit current state-of-theart models by making them more extractive rather than allowing them to understand input content and generate well-formed informative summaries. Sample CNN/Daily Mail News Summary An explosion rocks a chemical plant in China’s southeastern Fujian province for the second time in two years. Six were injured after the explosion and are being hospitalized. The explosion was triggered by an oil leak, though local media has not reported any toxic chemical spills. Sample BIGPATENT Summary A shoelace cover incorporating an interchangeable fashion panel for covering the shoelaces of a gym shoe. The shoelace cover is secured to the shoe by a number of straps threaded through slots in the shoelace cover. These straps secured to each side of the gym shoe include a loop and hook material such that the straps can be disengaged and the shoelace cover can be drawn back to expose the shoelaces. . . Figure 1: Sample summaries from CNN/Daily Mail and BIGPATENT. Extractive fragments reused from input are underlined. Repeated entities indicating discourse structure are highlighted in respective colors. Specifically, in these datasets, the summaries are flattened narratives with a simpler discourse structure, e.g., entities are rarely repeated as illustrated by the news summary in Fig. 1. Moreover, these summaries usually contain long fragments of text directly extracted from the input. 
Finally, the summary-worthy salient content is mostly present in the beginning of the input articles. We introduce BIGPATENT1, a new large-scale summarization dataset consisting of 1.3 million patent documents with human-written abstractive summaries. BIGPATENT addresses the aforementioned issues, thus guiding summarization research to better understand the input’s global structure and generate summaries with a more complex and coherent discourse structure. The key features of BIGPATENT are: i) summaries exhibit a richer discourse structure with entities re1BIGPATENT dataset is available to download online at evasharma.github.io/bigpatent. 2205 curring in multiple subsequent sentences as shown in Fig. 1, ii) salient content is evenly distributed in the document, and iii) summaries are considerably more abstractive while reusing fewer and shorter phrases from the input. To further illustrate the challenges in text summarization, we benchmark BIGPATENT with baselines and popular summarization models, and compare with the results on existing large-scale news datasets. We find that many models yield noticeably lower ROUGE scores on BIGPATENT than on the news datasets, suggesting a need for developing more advanced models to address the new challenges presented by BIGPATENT. Moreover, while existing neural abstractive models produce more abstractive summaries on BIGPATENT, they tend to repeat irrelevant discourse entities excessively, and often fabricate information. These observations demonstrate the importance of BIGPATENT in steering future research in text summarization towards global content modeling, semantic understanding of entities and relations, and discourse-aware text planning to build abstractive and coherent summarization systems. 2 Related Work Recent advances in abstractive summarization show promising results in generating fluent and informative summaries (Rush et al., 2015; Nallapati et al., 2016; Tan et al., 2017; Paulus et al., 2017). However, these summaries often contain fabricated and repeated content (Cao et al., 2018). Fan et al. (2018) show that, for content selection, existing models rely on positional information and can be easily fooled by adversarial content present in the input. This underpins the need for global content modeling and semantic understanding of the input, along with discourse-aware text planning to yield a well-formed summary (McKeown, 1985; Barzilay and Lapata, 2008). Several datasets have been used to aid the development of text summarization models. These datasets are predominantly from the news domain and have several drawbacks such as limited training data (Document Understanding Conference2), shorter summaries (Gigaword (Napoles et al., 2012), XSum (Narayan et al., 2018), and Newsroom (Grusky et al., 2018)), and near-extractive summaries (CNN / Daily Mail dataset (Hermann et al., 2015)). Moreover, due to the nature of 2https://duc.nist.gov/ Dataset # Doc Comp. Dens. Summary Doc ratio # word # sent # word CNN/DM 312,085 13.0 3.8 55.6 3.8 789.9 NYT 654,788 12.0 2.4 44.9 2.0 795.9 NEWSROOM 1,212,726 43.0 9.5 30.4 1.4 750.9 XSUM 226,711 18.8 1.2 23.3 1.0 431.1 ARXIV 215,913 39.8 3.8 292.8 9.6 6,913.8 PUBMED 133,215 16.2 5.8 214.4 6.9 3,224.4 BIGPATENT 1,341,362 36.4 2.4 116.5 3.5 3,572.8 Table 1: Statistics of BIGPATENT and other summarization datasets. # Doc: raw number of documents in each dataset. For all other columns, mean values are reported over all documents. BIGPATENT has a lower extractive fragment density (Dens.) 
and a higher compression ratio (Comp. ratio). news reporting, summary-worthy content is nonuniformly distributed within each article. ArXiv and PubMed datasets (Cohan et al., 2018), which are collected from scientific repositories, are limited in size and have longer yet extractive summaries. Thus, existing datasets either lack crucial structural properties or are limited in size for learning robust deep learning methods. To address these issues, we present a new dataset, BIGPATENT, which guides research towards building more abstractive summarization systems with global content understanding. 3 BIGPATENT Dataset We present BIGPATENT, a dataset consisting of 1.3 million U.S. patent documents collected from Google Patents Public Datasets using BigQuery (Google, 2018)3. It contains patents filed after 1971 across nine different technological areas. We use each patent’s abstract as the goldstandard summary and its description as the input.4 Additional details for the dataset, including the preprocessing steps, are in Appendix A.1. Table 1 lists statistics, including compression ratio and extractive fragment density, for BIGPATENT and some commonly-used summarization corpora. Compression ratio is the ratio of the number of words in a document and its summary, whereas density is the average length of the ex3Released and maintained by IFI CLAIMS Patent Services and Google, and licensed under Creative Commons Attribution 4.0 International License. 4The summarization task studied using BIGPATENT is notably different from traditional patent summarization task where patent claims are summarized into a more readable format (Cinciruk, 2015). 2206 1 st 2 nd 3 rd 4 th 0 10 20 30 40 50 60 occurence (%) CNN/DM NYT Newsroom XSum arXiv PubMed BigPatent Figure 2: % of salient unigrams present in the N th segments of the input. tractive fragment5 to which each word in the summary belongs (Grusky et al., 2018). Among existing datasets, CNN/DM (Hermann et al., 2015), NYT (Napoles et al., 2012), NEWSROOM (released) (Grusky et al., 2018) and XSUM (Narayan et al., 2018) are news datasets, while ARXIV and PUBMED (Cohan et al., 2018) contain scientific articles. Notably, BIGPATENT is significantly larger with longer inputs and summaries. 4 Dataset Characterization 4.1 Salient Content Distribution Inferring the distribution of salient content in the input is critical to content selection of summarization models. While prior work uses probabilistic topic models (Barzilay and Lee, 2004; Haghighi and Vanderwende, 2009) or relies on classifiers trained with sophisticated features (Yang et al., 2017), we focus on salient words and their occurrences in the input. We consider all unigrams, except stopwords, in a summary as salient words for the respective document. We divide each document into four equal segments and measure the percentage of unique salient words in each segment. Formally, let U be a function that returns all unique unigrams (except stopwords) for a given text. Then, U(di) denotes the unique unigrams in the ith segment of a document d, and U(y) denotes the unique unigrams in the corresponding summary y. The percentage of salient unigrams in the ith segment of a document is calculated as: |(U(di) ∩U(y))| × 100 |U(y)| % Fig. 2 shows that BIGPATENT has a fairly even distribution of salient words in all segments of the 5Extractive fragments are the set of shared sequences of tokens in the document and summary. 
n=1 n=2 n=3 n=4 0 50 100 Novel n-grams (%) CNN/DM NYT Newsroom XSum arXiv PubMed BigPatent Figure 3: % of novel n-grams in the summaries. input. Only 6% more salient words are observed in the 1st segment than in other segments. In contrast, for CNN/DM, NYT and Newsroom, approximately 50% of the salient words are present in the 1st segment, and the proportion drops monotonically to 10% in the 4th segment. This indicates that most salient content is present in the beginning of news articles in these datasets. For XSum, another news dataset, although the trend in the first three segments is similar to BIGPATENT, the percentage of novel unigrams in the last segment drops by 5% compared to 0.2% for BIGPATENT. For scientific articles (arXiv and PubMed), where content is organized into sections, there is a clear drop in the 2nd segment where related work is often discussed, with most salient information being present in the first (introduction) and last (conclusion) sections. Whereas in BIGPATENT, since each embodiment of a patent’s invention is sequentially described in its document, it has a more uniform distribution of salient content. Next, we probe how far one needs to read from the input’s start to cover the salient words (only those present in input) from the summary. About 63% of the sentences from the input are required to construct full summaries for CNN/DM, 57% for XSum, 53% for NYT, and 29% for Newsroom. Whereas in the case of BIGPATENT, 80% of the input is required. The aforementioned observations signify the need of global content modeling to achieve good performance on BIGPATENT. 4.2 Summary Abstractiveness and Coherence Summary n-gram Novelty. Following prior work (See et al., 2017; Chen and Bansal, 2018), we compute abstractiveness as the fraction of novel n-grams in the summaries that are absent from the input. As shown in Fig. 3, XSum comprises of notably shorter but more abstractive summaries. Besides that, BIGPATENT reports the sec2207 t = 1 t = 2 t = 3 t ≥3 CNN/DM 95.7% 3.9% 0.4% 0.1% NYT 97.6% 2.1% 0.3% 0.1% NEWSROOM 98.9% 1.0% 0.1% 0.02% ARXIV 89.5% 7.9% 1.7% 0.9% PUBMED 86.1% 9.3% 2.7% 2.0% BIGPATENT 75.9% 15.1% 5.1% 3.9% Table 2: % of entities occurring t times in summaries. Ent. Chain Length (In %) Ent. Recurrence at Datasets l = 1 l = 2 l = 3 l > 3 t + 1 t + 2 ≥t + 3 CNN/DM 97.7 2.1 0.2 0.02 0.3 0.2 0.2 NYT 98.7 1.2 0.1 0.01 0.4 0.2 0.1 NEWSROOM 99.6 0.4 0.02 0.002 0.2 0.1 0.1 ARXIV 95.6 3.8 0.5 0.1 1.6 1.0 3.8 PUBMED 93.9 4.9 0.9 0.3 2.0 1.1 2.1 BIGPATENT 85.9 11.1 2.3 0.7 2.4 1.1 1.2 Table 3: Left: % of entities of chain length l. Right: Avg. number of entities that appear at the tth summary sentence and recur in a later sentence. ond highest percentage of novel n-grams, for n ∈ {2, 3, 4}. Significantly higher novelty scores for trigram and 4-gram indicate that BIGPATENT has fewer and shorter extractive fragments, compared to others (except for XSum, a smaller dataset). This further corroborates the fact that BIGPATENT has the lowest extractive fragment density (as shown in Table 1) and contains longer summaries. Coherence Analysis via Entity Distribution. To study the discourse structure of summaries, we analyze the distribution of entities that are indicative of coherence (Grosz et al., 1995; Strube and Hahn, 1999). To identify these entities, we extract non-recursive noun phrases (regex NP →ADJ∗[NN]+) using NLTK (Loper and Bird, 2002). 
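As a concrete illustration of this extraction step, the following is a minimal sketch of pulling non-recursive noun phrases out of a sentence with NLTK's regexp chunker. The mapping of the pattern ADJ*[NN]+ onto Penn Treebank tags (JJ*, NN*) and the choice of tokenizer/tagger are our assumptions here, not a claim about the authors' exact implementation.

```python
# Minimal sketch: extract non-recursive noun phrases ("entities") from a
# sentence with NLTK, approximating the pattern NP -> ADJ*[NN]+.
# Assumes the 'punkt' and 'averaged_perceptron_tagger' NLTK data are installed.
import nltk

# One plausible Penn-Treebank rendering of ADJ*[NN]+ (an assumption):
# zero or more adjectives followed by one or more nouns.
CHUNK_GRAMMAR = "NP: {<JJ.*>*<NN.*>+}"
chunker = nltk.RegexpParser(CHUNK_GRAMMAR)

def extract_noun_phrases(sentence: str) -> list:
    """Return the noun-phrase strings found in one sentence."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    tree = chunker.parse(tagged)
    phrases = []
    for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
        phrases.append(" ".join(word for word, _tag in subtree.leaves()))
    return phrases

print(extract_noun_phrases(
    "The shoelace cover is secured to the shoe by a number of straps."))
# e.g. ['shoelace cover', 'shoe', 'number', 'straps']
```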
Finally, we use the entity-grid representation by Barzilay and Lapata (2008) and their coreference resolution rules to capture the entity distribution across summary sentences. In this work, we do not distinguish entities’ grammar roles, and leave that for future study. On average, there are 6.7, 10.9, 12.4 and 18.5 unique entities in the summaries for Newsroom, NYT, CNN/DM and BIGPATENT, respectively6. PUBMED and ARXIV reported higher number of unique entities in summaries (39.0 and 48.1 respectively) since their summaries are considerably longer (Table 1). Table 2 shows that 24.1% of entities recur in BIGPATENT summaries, which is higher than that on other datasets, indicating more 6We exclude XSum as its summaries are all one-sentence. complex discourse structures in its summaries. To understand local coherence in summaries, we measure the longest chain formed across sentences by each entity, denoted as l. Table 3 shows that 11.1% of the entities in BIGPATENT appear in two consecutive sentences, which is again higher than that of any other dataset. The presence of longer entity chains in the BIGPATENT summaries suggests its higher sentence-to-sentence relatedness than the news summaries. Finally, we examine the entity recurrence pattern which captures how many entities, first occurring in the tth sentence, are repeated in subsequent (t + ith) sentences. Table 3 (right) shows that, on average, 2.3 entities in BIGPATENT summaries recur in later sentences (summing up the numbers for t+2 and after). The corresponding recurring frequency for news dataset such as CNN/DM is only 0.4. Though PUBMED and ARXIV report higher number of recurrence, their patterns are different, i.e., entities often recur after three sentences. These observations imply a good combination of local and global coherence in BIGPATENT. 5 Experiments and Analyses We evaluate BIGPATENT with popular summarization systems and compare with well-known datasets such as CNN/DM and NYT. For baseline, we use LEAD-3, which selects the first three sentences from the input as the summary. We consider two oracles: i) ORACLEFRAG builds summary using all the longest fragments reused from input in the gold-summary (Grusky et al., 2018), and ii) ORACLEEXT selects globally optimal combination of three sentences from the input that gets the highest ROUGE-1 F1 score. Next, we consider three unsupervised extractive systems: TEXTRANK (Mihalcea and Tarau, 2004), LEXRANK (Erkan and Radev, 2004), and SUMBASIC (Nenkova and Vanderwende, 2005). We also adopt RNN-EXT RL (Chen and Bansal, 2018), a SEQ2SEQ model that selects three salient sentences to construct the summary using reinforcement learning. Finally, we train four abstractive systems: SEQ2SEQ with attention, PointerGenerator (POINTGEN) and a version with coverage mechanism (POINTGEN + COV) (See et al., 2017), and SENTREWRITING (Chen and Bansal, 2018). Experimental setups and model parameters are described in Appendix A.2. 
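To make the extractive baselines above concrete, here is a small sketch of LEAD-3 and a simplified ORACLEEXT. ROUGE-1 F1 is approximated with clipped unigram counts and the oracle uses brute-force search over sentence triples; the authors' actual ROUGE toolkit and any efficiency tricks are not reproduced, so treat this as an illustration rather than the paper's implementation.

```python
# Sketch of the LEAD-3 baseline and a brute-force ORACLEEXT upper bound.
# ROUGE-1 F1 is approximated with clipped unigram counts.
from collections import Counter
from itertools import combinations

def rouge1_f1(candidate_tokens, reference_tokens):
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if not candidate_tokens or not reference_tokens or overlap == 0:
        return 0.0
    precision = overlap / len(candidate_tokens)
    recall = overlap / len(reference_tokens)
    return 2 * precision * recall / (precision + recall)

def lead3(doc_sentences):
    """LEAD-3: simply take the first three input sentences (token lists)."""
    return doc_sentences[:3]

def oracle_ext(doc_sentences, reference_tokens, k=3):
    """Globally optimal k-sentence combination under (approximate) ROUGE-1 F1.
    Brute force: assumes the document has at least k sentences and is short
    enough that enumerating all k-subsets is feasible."""
    best, best_score = None, -1.0
    for triple in combinations(range(len(doc_sentences)), k):
        tokens = [t for i in triple for t in doc_sentences[i]]
        score = rouge1_f1(tokens, reference_tokens)
        if score > best_score:
            best, best_score = triple, score
    return [doc_sentences[i] for i in best], best_score
```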
Table 4 reports F1 scores of ROUGE-1, 2, and L (Lin and Hovy, 2003) for all models.

Table 4: ROUGE scores on three large datasets. The best results for non-baseline systems are in bold. Except for SentRewriting on CNN/DM and NYT, for all abstractive models, we truncate input and summaries at 400 and 100 words.

Models (CNN/DM: R-1 / R-2 / R-L | NYT: R-1 / R-2 / R-L | BIGPATENT: R-1 / R-2 / R-L)
LEAD-3: 40.23 / 17.52 / 36.34 | 32.93 / 17.69 / 29.58 | 31.27 / 8.75 / 26.18
ORACLEFRAG (Grusky et al., 2018): 93.36 / 83.19 / 93.36 | 88.15 / 74.74 / 88.15 | 91.85 / 78.66 / 91.85
ORACLEEXT: 49.35 / 27.96 / 46.24 | 42.62 / 26.39 / 39.50 | 43.56 / 16.91 / 36.52
TEXTRANK (Mihalcea and Tarau, 2004): 37.72 / 15.59 / 33.81 | 28.57 / 14.29 / 23.79 | 35.99 / 11.14 / 29.60
LEXRANK (Erkan and Radev, 2004): 33.96 / 11.79 / 30.17 | 27.32 / 11.93 / 23.75 | 35.57 / 10.47 / 29.03
SUMBASIC (Nenkova and Vanderwende, 2005): 31.72 / 9.60 / 28.58 | 23.16 / 7.18 / 20.06 | 27.44 / 7.08 / 23.66
RNN-EXT RL (Chen and Bansal, 2018): 41.47 / 18.72 / 37.76 | 39.15 / 22.60 / 34.99 | 34.63 / 10.62 / 29.43
SEQ2SEQ (Sutskever et al., 2014): 31.10 / 11.54 / 28.56 | 41.57 / 26.89 / 38.17 | 28.74 / 7.87 / 24.66
POINTGEN (See et al., 2017): 36.15 / 15.11 / 33.22 | 43.49 / 28.70 / 39.66 | 30.59 / 10.01 / 25.65
POINTGEN+COV (See et al., 2017): 39.23 / 17.09 / 36.03 | 45.13 / 30.13 / 39.67 | 33.14 / 11.63 / 28.55
SENTREWRITING (Chen and Bansal, 2018): 40.04 / 17.61 / 37.59 | 44.77 / 29.10 / 41.55 | 37.12 / 11.87 / 32.45

For BIGPATENT, almost all models outperform the LEAD-3 baseline due to the more uniform distribution of salient content in BIGPATENT's input articles. Among extractive models, TEXTRANK and LEXRANK outperform RNN-EXT RL, which was trained on only the first 400 words of the input, again suggesting the need for neural models to efficiently handle longer input. Finally, SENTREWRITING, a reinforcement learning model with ROUGE as reward, achieves the best performance on BIGPATENT.

Table 5: % of novel n-grams (highest % are highlighted), and % of entities occurring m times in generated summaries of BIGPATENT. POINTGEN+COV repeats entities less often than humans do.

Models (% Novel n-grams: n=1 / n=2 | % Entities Occurring m Times: m=1 / m=2 / m=3 / m>3)
GOLD: 21.5% / 57.7% | 75.5% / 15.2% / 5.2% / 4.0%
SEQ2SEQ: 18.6% / 52.0% | 51.4% / 19.4% / 6.7% / 22.6%
POINTGEN+COV: 9.7% / 33.9% | 82.7% / 13.8% / 2.4% / 1.2%
SENTREWRITING: 11.5% / 44.9% | 69.5% / 17.3% / 6.6% / 6.6%

Table 5 presents the percentage of novel n-grams in the generated summaries. Although the novel content in the generated summaries (for both unigrams and bigrams) is comparable to that of GOLD, we observe repeated instances of fabricated or irrelevant information. For example, "the upper portion is configured to receive the upper portion of the sole portion", part of a SEQ2SEQ-generated summary, contains irrelevant repetitions compared to the human summary in Fig. 1. This suggests the lack of semantic understanding and control for generation in existing neural models. Table 5 also shows the entity distribution (§4.2) in the generated summaries for BIGPATENT. We find that neural abstractive models (except POINTGEN+COV) tend to repeat entities more often than humans do. For GOLD, only 5.2% and 4.0% of entities are mentioned thrice or more, compared to 6.7% and 22.6% for SEQ2SEQ. POINTGEN+COV, which employs a coverage mechanism to explicitly penalize repetition, generates significantly fewer entity repetitions. These findings indicate that current models fail to learn the entity distribution pattern, suggesting a lack of understanding of entity roles (e.g., their importance) and discourse-level text planning.
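The abstractiveness and repetition statistics in Table 5 can be reproduced in spirit with a few lines of code. The sketch below counts the fraction of summary n-grams absent from the input and how often each extracted entity string recurs within a summary; it uses exact string matching rather than the entity-grid and coreference rules of Barzilay and Lapata (2008), so the numbers it yields are only an approximation of the analysis above.

```python
# Sketch of the two statistics reported in Table 5: novel n-gram rate and
# entity occurrence counts. Entities here are plain strings (e.g., the noun
# phrases extracted earlier); no coreference resolution is applied.
from collections import Counter

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_rate(summary_tokens, input_tokens, n):
    """Fraction of summary n-grams that never appear in the input."""
    summ = ngrams(summary_tokens, n)
    if not summ:
        return 0.0
    return len(summ - ngrams(input_tokens, n)) / len(summ)

def entity_occurrence_distribution(summary_sentence_entities):
    """summary_sentence_entities: one list of entity strings per summary
    sentence. Returns the share of entities mentioned exactly m times,
    with m capped at '>3' as in Table 5."""
    counts = Counter(e for sent in summary_sentence_entities for e in sent)
    buckets = Counter()
    for _, m in counts.items():
        buckets[m if m <= 3 else ">3"] += 1
    total = sum(buckets.values())
    return {k: v / total for k, v in buckets.items()} if total else {}
```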
6 Conclusion We present the BIGPATENT dataset with humanwritten abstractive summaries containing fewer and shorter extractive phrases, and a richer discourse structure compared to existing datasets. Salient content from the BIGPATENT summaries is more evenly distributed in the input. BIGPATENT can enable future research to build robust systems that generate abstractive and coherent summaries. Acknowledgements This research is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We also thank the anonymous reviewers for their constructive suggestions. 2209 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Federico Barrios, Federico L´opez, Luis Argerich, and Rosa Wachenchauzer. 2016. Variations of the similarity function of textrank for automated summarization. arXiv preprint arXiv:1602.03606. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. ” O’Reilly Media, Inc.”. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics. David Cinciruk. 2015. Patent summarization and paraphrasing. http://www.ece.drexel.edu/ walsh/David_PatentSummarization. pdf. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Lisa Fan, Dong Yu, and Lu Wang. 2018. 
Robust neural abstractive summarization systems and evaluation against adversarial information. In Workshop on Interpretability and Robustness in Audio, Speech, and Language (IRASL). Neural Information Processing Systems. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018a. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018b. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Google. 2018. Google patents public datasets: connecting public, paid, and private patent data. https://console.cloud. google.com/marketplace/details/ google_patents_public_datasets/ google-patents-public-data?_ga=2. 148226999.-1648178590.1534442735& pli=1. Accessed: 2018-08-30. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2). Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719. Association for Computational Linguistics. Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362–370. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. 2210 Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Kathleen R McKeown. 1985. Discourse strategies for generating natural-language text. Artificial Intelligence, 27(1):1–41. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Association for Computational Linguistics. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pages 95–100. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Association for Computational Linguistics. Ani Nenkova and Lucy Vanderwende. 2005. The impact of frequency on summarization. Microsoft Research, Redmond, Washington, Tech. Rep. MSR-TR2005, 101. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Association for Computational Linguistics. Michael Strube and Udo Hahn. 1999. Functional centering grounding referential coherence in information structure. Computational Linguistics, 25(3). Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181. Association for Computational Linguistics. USPTO. 2013. Cooperative patent classification scheme. https://www.uspto.gov/web/ patents/classification/cpc/html/ cpc.html. Accessed: 2018-08-30. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yinfei Yang, Forrest Bao, and Ani Nenkova. 2017. Detecting (un)important content for single-document news summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 707–712. Association for Computational Linguistics. A Appendices A.1 Dataset Details BIGPATENT, a novel large-scale summarization dataset of 1.3 million US Patent documents, is collected from Google Patents Public Datasets using BigQuery (Google, 2018). Google has indexed more than 87 million patents with full text from 17 different patent offices so far. We only consider patent documents from United States Patent and Trademark Office (USPTO) filed in English language after 1971 in order to get considerably more consistent writing and formatting style to facilitate easier parsing of the text. Each US patent application is filed under a Cooperative Patent Classification (CPC) code (USPTO, 2013) that provides a hierarchical system of language independent symbols for the classification of patents according to the different areas of technology to which they pertain. There are nine such classification categories: A (Human Necessities), B (Performing Operations; Transporting), C (Chemistry; Metallurgy), D (Textiles; Paper), E (Fixed Constructions), F (Mechanical 2211 CPC code # Doc Comp. Dens. 
Summary Doc ratio # word # sent # word A 193,483 39.5 2.3 109.5 3.4 3,520.7 B 179,467 28.1 2.3 116.6 3.4 2,900.4 C 112,269 71.3 2.6 97.9 2.6 5,278.4 D 11,294 30.1 2.3 113.0 3.2 2,892.1 E 38,271 26.9 2.2 117.2 3.7 2,814.3 F 95,076 26.0 2.3 116.7 3.5 2,737.8 G 287,706 35.9 2.4 123.7 3.6 3,924.1 H 285,577 32.7 2.4 121.1 3.6 3,531.4 Y 138,219 33.5 2.3 116.3 3.5 3,328.0 Table 6: Statistics for 9 CPC codes in BIGPATENT. Engineering; Lightning; Heating; Weapons; Blasting), G (Physics), H (Electricity), and Y (General tagging of new or cross-sectional technology). Table 6 summarizes the statistics for BIGPATENT across all nine categories. From the full public dataset, for each patent record, we retained its title, authors, abstract, claims of the invention and the description text. Abstract of the patent, which is generally written by the inventors after the patent application is approved, was considered as the gold-standard summary of the patent. Description text of the patent contains several other fields such as background of the invention covering previously published related inventions, description of figures, and detailed description of the current invention. For the summarization task, we considered the detailed description of each patent as the input. We tokenized the articles and summaries using Natural Language Toolkit (NLTK) (Bird et al., 2009). Since there was a large variation in size of summary and input texts, we removed patent records with compression ratio less than 5 and higher than 500. Further, we only kept records with summary length between 10 and 2, 500 words, and input length of at least 150 and at most 80, 000. Next, to focus on the abstractive summary-input pairs, we removed the records whose percentage of summary-worthy unigrams absent from the input (novel unigrams) was less than 15%. Finally, we removed references of figure from summaries and input, along with full tables from the input. Salient Content Distribution (bigrams and longest common subsequences). As also shown in the main paper, i.e., Figure 4 and Figure 5, BIGPATENT demonstrates a relatively uniform distribution of the salient content from the summary 1 st 2 nd 3 rd 4 th 0 20 40 60 80 occurence (%) CNN/DM NYT Newsroom XSum arXiv PubMed BigPatent Figure 4: % of salient bigrams present in N th segment of input. 1 st 2 nd 3 rd 4 th 0 10 20 30 40 50 60 occurence (%) CNN/DM NYT Newsroom XSum arXiv PubMed BigPatent Figure 5: % of salient longest common subsequences present in N th segment of input. in all parts of the input. Here, the salient content is considered as all bigrams and longest common sub-sequences from the summary. A.2 Experiment details For all experiments, we randomly split BIGPATENT into 1, 207, 222 training pairs, 67, 068 validation pairs, and 67, 072 test pairs. For CNN/DM, we followed preprocessing steps from See et al. (2017), using 287, 226 training, 13, 368 validation, and 11, 490 test pairs. For NYT, following preprocessing steps from Paulus et al. (2017), we used 589, 298 training, 32, 739 validation, and 32, 739 test pairs. Extract-based Systems. For TEXTRANK, we used the summanlp7 (Barrios et al., 2016) to generate summary with three sentences based on TEXTRANK algorithm (Mihalcea and Tarau, 2004). For LEXRANK and SUMBASIC, we used sumy8. For RNN-EXT RL from Chen and Bansal (2018), we used the implementation provided by the authors9. Abstract-based Systems. 
For all the neural abstractive summarization models (except for SENTREWRITING), we truncated the input to 400 words and output to 100 words. Except for SENTREWRITING, all other models were trained us7https://pypi.org/project/summa/ 8https://pypi.python.org/pypi/sumy 9https://github.com/ChenRocks/fast abs rl 2212 ing OpenNMT-py python library10 based on the instructions provided by the authors (Gehrmann et al., 2018b). We provide further details for each model below. SEQ2SEQ with attention (Sutskever et al., 2014) was trained using a 128-dimensional wordembedding and 512-dimensional 1-layer LSTM. We used a bidirectional LSTM for the encoder and attention mechanism from Bahdanau et al. (2014). The model was trained using Adagrad (Duchi et al., 2011) with learning rate 0.15 and an initial accumulator value of 0.1. At inference time, we used the beam size 5. We used the same settings for training POINTGEN and POINTGEN + COV (See et al., 2017), adding the copy attention mechanism that allows the model to copy words from the source. At inference time, for POINTGEN + COV, we used coverage penalty with beta set to 5 and length penalty (Wu et al., 2016) with alpha as 0.9. For SENTREWRITING from Chen and Bansal (2018), we again used the implementation by the authors11 to train their full RL-based model using their default parameters. A.3 Summaries for sample Input Document from BIGPATENT For the sample summary presented in introduction of the main paper, in Table 7 we list complete gold-standard summary along with the summaries generated by SEQ2SEQ, POINTGEN + COV and SENTREWRITING. For the respective input, we also list the first 400 words for brevity. 10https://opennmt.net/OpenNMT-py/Summarization.html 11https://github.com/ChenRocks/fast abs rl 2213 Gold-Standard summary a shoelace cover incorporating an interchangeable fashion panel for covering the shoelaces of a gym shoe. the shoelace cover is secured to the shoe by a number of straps threaded through slots in the shoelace cover. a strap secured to each side of the gym shoe includes a loop and hook material such that the straps can be disengaged and the shoelace cover can be drawn back to expose the shoelaces of the shoe. the fashion panel is attached to the shoelace cover by a loop and hook material such that at the whim of the shoe wearer, the fashion panel can be replaced by other fashion panels to convey a fashion statement. SEQ2SEQ generated summary a shoe having a sole portion and an upper portion. the sole portion includes an upper portion and a lower portion. the upper portion is configured to receive the upper portion of the sole portion. the lower portion of the upper portion is configured to receive the upper portion of the sole portion. POINTGEN + COV generated summary a gym shoe and associated shoelace shoe is disclosed. the shoe includes a sole portion, a shoelace cover, and an upper portion. the upper portion has a toe area that extends from the toe area to the opening. the shoelace cover is curved to the shoelace. SENTREWRITING generated summary a gym shoe and associated shoelace cover and associated shoelace cover and fashion panel are disclosed. the shoe includes a sole portion and an upper portion. the shoelace cover is a semi-rigid panel that is curved to conform to the shoelace area of the shoelace area. the shoelace area is generally split into a shoelace area and a shoelace area. a shoe for use in a shoe, such as a shoe, is disclosed. a tongue extends from the toe area to the shoelace. 
Input (first 400 words) the following discussion of the preferred embodiment concerning a gym shoe and associated shoelace cover and fashion panel is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. the shoe includes a sole portion, generally comprised of a rugged rubber material, and an upper portion 14 generally comprised of a durable and pliable leather or canvas material. at a back location of the upper portion is an opening for accepting a wearer’s foot. a cushion is visible through the opening on which the wearer’s foot is supported. at a front end of the upper portion is a toe area. extending from the toe area to the opening is a shoelace area. the shoelace area is generally split such that a shoelace is threaded through eyelets associated with the shoelace area in order to bind together the shoelace area and secure the shoe to the wearer’s foot. a tongue, also extending from the toe area to the opening, is positioned beneath the shoelace such that the tongue contacts the wearer’s foot, and thus provides comfort against the shoelace to the wearer. the basic components and operation of a gym shoe is well understood to a person of normal sensibilities, and thus, a detailed discussion of the parts of the shoe and their specific operation need not be elaborated on here. secured to the upper portion of the shoe covering the shoelace area is a shoelace cover. in a preferred embodiment, the shoelace cover is a semi-rigid panel that is curved to be shaped to conform to the shoelace area such that an upper portion of the shoelace cover extends a certain distance along the sides of the upper portion adjacent the opening. the shoelace cover narrows slightly as it extends towards the toe area. the specifics concerning the shape, dimensions, material, rigidity, etc. of the shoelace cover will be discussed in greater detail below. additionally, the preferred method of securing the shoelace cover to the shoe will also be discussed below. in a preferred embodiment, affixed to a top surface of the shoelace cover is a fashion panel. the fashion panel is secured to the shoelace cover by an applicable securing mechanism, such as a loop and hook and/or velcro type fastener device, so that the fashion panel can be readily removed from the shoelace cover and replaced with an alternate fashion panel having a different design. Table 7: Gold-standard and system generated summaries for BIGPATENT. Input (pre-processed) is truncated to 400 words for brevity.
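The filtering criteria described in Appendix A.1 translate into a straightforward record-level filter. The sketch below assumes each record exposes pre-tokenized summary and input word lists; the field names, the optional stopword set, and the inclusive boundaries are our assumptions rather than details taken from the released preprocessing code.

```python
# Sketch of the record-level filters from Appendix A.1. Field names such as
# record["summary_tokens"] and record["input_tokens"] are hypothetical.
def novel_unigram_pct(summary_tokens, input_tokens, stopwords=frozenset()):
    """Percentage of summary unigrams (optionally minus stopwords) absent
    from the input."""
    summ = {w for w in summary_tokens if w not in stopwords}
    if not summ:
        return 0.0
    return 100.0 * len(summ - set(input_tokens)) / len(summ)

def keep_record(record):
    n_sum = len(record["summary_tokens"])
    n_doc = len(record["input_tokens"])
    if n_sum == 0:
        return False
    compression = n_doc / n_sum
    return (5 <= compression <= 500          # compression ratio in [5, 500]
            and 10 <= n_sum <= 2500          # summary length in words
            and 150 <= n_doc <= 80000        # input length in words
            and novel_unigram_pct(record["summary_tokens"],
                                  record["input_tokens"]) >= 15.0)
```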
2019
212
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2214 Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference Tobias Falke1∗, Leonardo F. R. Ribeiro2, Prasetya Ajie Utama2, Ido Dagan3 and Iryna Gurevych2 1Amazon, 2Research Training Group AIPHES and UKP Lab, Technische Universit¨at Darmstadt, Germany, 3Bar-Ilan University, Ramat-Gan, Israel [email protected], {ribeiro,utama}@aiphes.tu-darmstadt.de, [email protected], [email protected] Abstract While recent progress on abstractive summarization has led to remarkably fluent summaries, factual errors in generated summaries still severely limit their use in practice. In this paper, we evaluate summaries produced by state-of-the-art models via crowdsourcing and show that such errors occur frequently, in particular with more abstractive models. We study whether textual entailment predictions can be used to detect such errors and if they can be reduced by reranking alternative predicted summaries. That leads to an interesting downstream application for entailment models. In our experiments, we find that outof-the-box entailment models trained on NLI datasets do not yet offer the desired performance for the downstream task and we therefore release our annotations as additional test data for future extrinsic evaluations of NLI. 1 Introduction The general success of deep learning techniques and the availability of large-scale singledocument summarization datasets, such as the CNN-DailyMail (CNN-DM) corpus (Hermann et al., 2015), have recently led to a renewed interest in abstractive summarization. Following the pioneering works of Rush et al. (2015), Chopra et al. (2016) and Nallapati et al. (2016), many models have been developed in recent years that can all generate summaries by freely choosing words from a large vocabulary rather than reusing full sentences from the input document. While neural models have been very successful at producing fluent text with this approach, a ∗The work was done while the first author was also affiliated to the research training group AIPHES at TU Darmstadt. Source Sentence: prince george could be days away of becoming an older brother as the duchess is due to give birth to her second child mid-to-late april. Summary Sentence: prince george is due to give birth to her second child mid-to-late april. Figure 1: Example of an incorrect summary sentence produced by PGC (see Section 4) on CNN-DM. downside is that there is less guarantee than in extractive approaches that the content of the summary is factually correct. Such models regularly introduce errors as illustrated in Figure 1, where the summary sentence is clearly not supported by the document. For sentence summarization, Cao et al. (2018) found up to 30% of summaries to be incorrect. That greatly reduces their usefulness, as a user cannot trust the content of the summary. In this paper, we follow the idea that all information in a summary should be entailed by the source document. We study the use of natural language inference (NLI) (Bowman et al., 2015), also known as textual entailment (Dagan et al., 2006), to detect factual errors. In particular, we test whether entailment predictions of NLI models can be used to rerank generated summaries such that more correct ones are preferred. 
Such a reranking approach can be easily combined with any recent summarization model and allows us to clearly quantify the impact of using NLI. Our contributions and the organization of this paper are the following: First, we describe how the correctness of a generated summary can be verified efficiently via crowdsourcing. Second, we report correctness estimates for summaries generated by three recent abstractive summarization 2215 systems, showing that even recent state-of-the-art models have errors in 25% of their summaries. Finally, we compare different NLI models regarding their ability to rank more correct summaries above incorrect alternatives. Here, our main finding is that models trained on NLI datasets transfer poorly to our downstream task, limiting the effectiveness of reranking. To improve NLI models for this setup, we release our collected annotations to be used as additional test data in future work.1 2 Related Work Previous work already proposed the use of explicit proposition structures (Cao et al., 2018) and multi-task learning with NLI (Li et al., 2018; Pasunuru et al., 2017) to successfully improve the correctness of abstractive sentence summaries. In this work, we instead focus on the more challenging single-document summarization, where longer summaries allow for more errors. Very recently, Fan et al. (2018) showed that with ideas similar to Cao et al. (2018)’s work, the correctness of document summaries can also be improved. Moreover, Guo et al. (2018) and Pasunuru and Bansal (2018) proposed to use NLI-based loss functions or multi-task learning with NLI for document summarization. But unfortunately, their experiments do not evaluate whether the techniques improve summarization correctness. We are the first to use NLI in a reranking setup, which is beneficial for this study as it allows to us to clearly isolate the net impact of the NLI component. 3 Evaluating Summary Correctness Similar to previous work by Cao et al. (2018) and Li et al. (2018), we argue that the correctness of a generated summary can only be reliably evaluated by manual inspection. But in contrast to previous studies, we rely on crowdsourcing to make the evaluation more efficient. In our crowdsourcing interface, we show a summary sentence by sentence on the left and the full source document on the right. For every summary sentence, a worker assigns the label correct, if the information is entailed by the document, incorrect, if it contradicts the document or contains information not present2, or unclear, if the worker cannot 1The data is available at https://tudatalib.ulb. tu-darmstadt.de/handle/tudatalib/2002. 2In NLI terms, information not present in the document would be neutral w.r.t the document, but in a summary it is still undesired, as all its content should be entailed. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 0.55 0.6 0.65 0.7 0.75 # Annotators κ Figure 2: Agreement between crowdsourced and expert annotations at increasing numbers of workers. decide. In particular, as we cannot assume that crowdworkers are familiar with the term entailment, we ask them whether a summary sentence is “correct given the information in the article”. As many generated sentences are largely extractive, our interface also highlights the sentence in the source document with the highest word overlap, helping the worker to find the relevant information faster. We pay workers $0.20 per task (labeling all sentences of one summary). 
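The overlap-based highlighting mentioned above is easy to make concrete: for each summary sentence, pick the source sentence sharing the most word types. The sketch below handles tokenization and ties in an arbitrary way and is not meant to mirror the annotation interface exactly.

```python
# Sketch: find the source sentence with the highest word overlap with a
# summary sentence, as used to pre-highlight evidence for annotators.
def best_overlap_sentence(summary_sentence, source_sentences):
    """Both arguments are assumed to be lowercased, tokenized word lists
    (source_sentences is a list of such lists). Returns the source index."""
    summary_words = set(summary_sentence)
    best_idx, best_overlap = 0, -1
    for idx, sent in enumerate(source_sentences):
        overlap = len(summary_words & set(sent))
        if overlap > best_overlap:
            best_idx, best_overlap = idx, overlap
    return best_idx
```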
Given the correctness labels for every sentence, we first merge the labels collected from different annotators. A summary then receives the label incorrect if at least one of its sentences has been labeled as such, otherwise, it is labeled as correct. A challenge of crowdsourcing is that workers are untrained and some might produce low quality annotations (Sabou et al., 2014). For our task, an additional challenge is that some errors are rather subtle, while on the other hand the majority of summary sentences are correct, which requires workers to carry out the task very carefully to catch these rare cases. We use MACE (Hovy et al., 2013), a Bayesian model that incorporates the reliability of individual workers, to merge sentence-level labels. We also performed an experiment to determine the necessary number of workers to obtain reliable labels. Two annotators from our lab labeled 50 generated summaries (140 sentences) manually and then merged their labels to obtain a gold standard. For the same data, we collected 14 labels per sentence from crowdworkers. Figure 2 shows the agreement, measured as Cohen’s κ, between the MACE-merged labels of different subsets of the crowdsourced labels and the gold standard. We find that the agreement is substantial with at least 3 workers and that it plateaus at 9, with κ at 0.74. 2216 Model Incorrect ROUGE-1 ROUGE-2 ROUGE-L Length PGC (See et al., 2017) 8% 39.49% 17.24% 36.35% 59.7 FAS (Chen and Bansal, 2018) 26% 40.88% 17.80% 38.53% 72.1 BUS (Gehrmann et al., 2018) 25% 41.52% 18.76% 38.60% 54.4 Table 1: Fraction of incorrect summaries produced by recent summarization systems on the CNN-DM test set, evaluated on a subset of 100 summaries. ROUGE scores (on full test set) and average summary length for reference. Source: [...] jim jepps used a blog called the daily maybe to defend “rape fantasies”, describe paedophiles as “complex human beings” and question why teachers who have relationships with pupils are put on the sex offenders register. [...] PGC: green party leader natalie bennett used a blog called the daily maybe to defend [...] Source: (cnn) if newly revised nypd training materials are approved by a federal judge, new cadets could be taking courses reminding them “not to engage in racial profiling.” [...] FAS: new: new nypd training materials are approved by a federal judge. [...] [if missing] Source: england’s first-choice right-back at the world cup looks set to leave liverpool after six years this summer. [...] BUS: england’s premier league clubs set to leave liverpool after six years this summer. [...] Figure 3: Examples of incorrect sentences produced by different summarization models on the CNN-DM test set. 4 Correctness of State-of-the-Art Models Using the crowd-based evaluation, we assessed the correctness of summaries for a randomly sampled subset of 100 summaries from the CNN-DM test set. We included three summarization models: PGC The pointer-generator model with coverage as introduced by See et al. (2017). FAS The hybrid extractive-abstractive system proposed by Chen and Bansal (2018) including their redundancy-based reranking. BUS The bottom-up summarization system recently proposed by Gehrmann et al. (2018). To the best of our knowledge, BUS is the state-ofthe-art abstractive model on the non-anonymized version of CNN-DM as of writing this, while FAS is only slightly behind. We use the original generated summaries provided by the authors and crowdsource correctness labels using 9 workers. Table 1 shows the evaluation results3. 
In line with the findings for sentence summarization (Cao et al., 2018; Li et al., 2018), we observe that factual errors are also a frequent problem for document summarization. Interestingly, the fraction of incorrect summaries is substantially higher for FAS and BUS compared to PGC. The length of the generated summaries appears to be unrelated to the number of errors. Instead, the higher abstractiveness of summaries produced by FAS and BUS, as analyzed in their respective papers, seems to also increase the chance of introducing errors. In addition, we also observe that among the three systems correctness and ROUGE scores do not correlate, emphasizing one more time that a ROUGE-based evaluation alone is far too limited to account for the full scope of the summarization task. (Footnote 3: The ROUGE scores have been recomputed by us on the used data and match the reported scores very closely.)

Figure 3 shows an incorrect summary sentence for each model. Common mistakes are using wrong subjects or objects in a proposition (examples 1 and 3), confusing numbers, reporting hypothetical facts as factual (example 2), or attributing quotes to the wrong person. Especially BUS and FAS often combine a subject and an object from different parts of a complex sentence such that a new, not-entailed proposition is formed, as demonstrated by the example in Figure 1.

5 Reranking based on NLI Predictions

Having seen that incorrect facts are an issue in state-of-the-art summarization models, we now turn to leveraging NLI to address this issue.

5.1 Reranking Approach

Our reranking approach follows the idea that everything in a summary should be entailed by the source document. Given a document D and summarization system S, we assume that S can produce a list of k alternative summaries S_1, ..., S_k of D. As most models typically search for the best summary sequence with beam search, k alternative summaries can be easily obtained by keeping all hypotheses from a beam search with size k. Let N be an NLI model that predicts the probability N(p, h) that sentence h is entailed by sentence p. We score each summary alternative S_i, consisting of sentences s_{i1}, ..., s_{in}, heuristically based on its entailment probability given the document D, with sentences d ∈ D, as follows:

\sigma(S_i) = \frac{1}{n} \sum_{j=1}^{n} \max_{d \in D} N(d, s_{ij})

We max over the sentences of the source document, as it is sufficient for a summary sentence to be entailed by one source sentence, but average over the summary sentences, as all of them should be entailed. Out of the k summary alternatives, the one with the highest score \sigma(S_i) is the new predicted summary after reranking.

5.2 Experiments

We perform two experiments using NLI models for summary-level and sentence-level reranking.

NLI Models In our experiments, we test five NLI models. We use Parikh et al. (2016)'s decomposable attention model (DA) and Chen et al. (2017)'s enhanced sequential inference model (ESIM) as reimplemented and augmented with ELMo embeddings (Peters et al., 2018) by AllenNLP.4 Further, we also include our own implementations of InferSent (Conneau et al., 2017) and shortcut-stacked encoders (SSE) (Nie and Bansal). And finally, we include a version of BERT-base (Devlin et al., 2019) fine-tuned on MultiNLI (Williams et al., 2018). DA and ESIM have been trained on SNLI 1.0 (Bowman et al., 2015), achieving 86.4% and 88.5% accuracy; InferSent and SSE were trained on MultiNLI, achieving 70.3% and 73.7% mismatched dev set accuracy. The fine-tuned BERT model has 83.6% mismatched accuracy on MultiNLI.
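A direct implementation of the reranking score \sigma is shown below. The NLI model is abstracted as a callable nli(premise, hypothesis) returning an entailment probability; that interface, and scoring all premise-hypothesis pairs exhaustively, are simplifications for illustration rather than the authors' code.

```python
# Sketch of the entailment-based reranking score: average over summary
# sentences of the maximum entailment probability against any source sentence.
def summary_score(nli, doc_sentences, summary_sentences):
    """nli(premise, hypothesis) -> P(hypothesis entailed by premise)."""
    if not summary_sentences:
        return 0.0
    per_sentence = [
        max(nli(d, s) for d in doc_sentences) for s in summary_sentences
    ]
    return sum(per_sentence) / len(per_sentence)

def rerank(nli, doc_sentences, candidate_summaries):
    """candidate_summaries: list of summaries, each a list of sentences.
    Returns the candidate with the highest score sigma."""
    return max(candidate_summaries,
               key=lambda summ: summary_score(nli, doc_sentences, summ))
```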
Summary Reranking To avoid the repeated effort of post-hoc correctness evaluations, we first created an annotated dataset from the validation part of CNN-DM. For 200 documents, we sampled 5 hypotheses out of a beam with size 100 and 4https://allennlp.org/ Split NLI Model Incor. ∆ ↑ ↓ Val Original 42.1% Random 50.7% +8.6 16 26 DA 51.4% +9.3 13 23 SSE 45.8% +3.7 18 22 ESIM 39.3% -2.8 23 20 InferSent 38.3% -3.8 24 20 BERT 28.0% -14.1 25 10 Test Original 26.0% ESIM 29.0% +3.0 11 14 Table 2: Fraction of incorrect summaries at first position after reranking with different NLI models. ↑and ↓show the absolute number of improved (incorrect replaced by correct) and worsened (vice versa) instances. crowdsourced correctness labels for the resulting 1000 summaries. Since the availability of at least one correct summary hypothesis is a prerequisite of the reranking approach, we rely on FAS which uses a variant of beam search yielding more diverse hypotheses (Li et al., 2016). We use the code and pretrained model provided by the authors. For 107 out of the 200 documents, an incorrect and correct summary is among the 5 alternatives. Table 2 shows that in this sample from the validation data, the fraction of incorrect summaries at first position, when the 5 alternatives are ranked as during beam search, is at 42.1%. Using entailment probabilities of ESIM and InferSent, we can slightly improve upon that and reduce incorrect summaries. However, with DA and SSE, more incorrect summaries end up in the first position. Note that these results are not in line with the model’s NLI accuracies, underlining that performance on NLI does not directly transfer to our task. Only for BERT, which outperforms the other models on NLI by a large margin, we also see substantially better reranking performance. But even for this powerful model, more than half of the errors still remain in the summaries.5 Interestingly, we also find that for ESIM and InferSent, reranking hurts in many cases, leaving just a few cases of net improvement. Given the validation results, we then applied reranking to the CNN-DM test data followed by a post-hoc correctness evaluation as in Section 4. We used the ESIM model and reranked all 100 5Note that the construction of the validation dataset ensures that the fraction of incorrect summaries can be reduced to 0% by reranking. For the test data, the lower bound is not known (as not all 100 hypotheses have been annotated). 2218 Source: the home which was built for former australian prime minister malcolm fraser and his wife tamie has been opened for inspection just a day after his sudden passing. IS DA SSE ESIM BERT Correct: the home was built for former prime minister malcolm fraser and his wife tamie. 34% 86% 54% 94% 99% Incorre.: the home was built for inspection, just a day after his sudden passing. 99% 96% 99% 96% 96% Figure 4: Two alternative sentences from generated summaries, one correct and one incorrect, for the given source sentence. All tested NLI models predict very high entailment probabilities for the incorrect sentence, with only BERT estimating a slightly higher probability for the correct alternative. beam hypotheses generated by FAS.6 In contrast to the validation sample, the fraction of incorrect summaries increases from 26% to 29% (Table 2), demonstrating that the slight improvement on the validation data does not transfer to the test set. 
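The Table 2 numbers follow from a simple comparison of the top-ranked hypothesis before and after reranking. The sketch below assumes, for each document, a list of (summary, is_correct) hypotheses in the original beam order; this bookkeeping is our reading of the ↑/↓ columns, not code released with the paper.

```python
# Sketch of the evaluation behind Table 2: fraction of incorrect summaries at
# rank 1, plus how many documents improved or worsened after reranking.
def evaluate_reranking(documents, score_fn):
    """documents: list of (doc_sentences, hypotheses), where hypotheses is a
    list of (summary_sentences, is_correct) in original beam order.
    score_fn(doc_sentences, summary_sentences) could be, e.g.,
    functools.partial(summary_score, nli) from the earlier sketch."""
    incorrect_before = incorrect_after = improved = worsened = 0
    for doc_sentences, hypotheses in documents:
        correct_before = hypotheses[0][1]
        reranked = max(hypotheses,
                       key=lambda h: score_fn(doc_sentences, h[0]))
        correct_after = reranked[1]
        incorrect_before += not correct_before
        incorrect_after += not correct_after
        improved += (not correct_before) and correct_after
        worsened += correct_before and (not correct_after)
    n = len(documents)
    return {"incorrect@1 before": incorrect_before / n,
            "incorrect@1 after": incorrect_after / n,
            "improved": improved, "worsened": worsened}
```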
Sentence Ranking To better understand the effect of NLI models, we carried out a second experiment that factors out some complexities of reranking. From the sampled and annotated validation data, we derived 373 triples of a source sentences d and two summary sentences, one correct (s+) and one incorrect (s−), covering the same content. We test how often the NLI models prefer the wrong sentence, i.e. N(d, s−) ≥N(d, s+). Table 3 shows the results. Here, ESIM performs best, followed by BERT. InferSent, while being slightly better than ESIM before, performs worse in this setup, demonstrating that the raw NLI performance does not directly correspond to the reranking performance. In general, we see that all five models leave a large gap to human performance, which we determined via crowdsourcing. Discussion Looking at the data, we found many examples for which the NLI predictions are not as expected (as shown in Figure 4), although the incorrect sentence can be easily spotted by humans. One reason for this could be the domain shift from SNLI and MultiNLI to the newswire text of CNN-DM, suggesting that data from more diverse genres is needed. Another known issue is that NLI models tend to rely on simplifying heuristics such as lexical overlap (McCoy et al., 2019), explaining the high entailment probability that even BERT predicts for the incorrect sentence in Figure 4. These results and examples illustrate that 6When performing this manual evaluation, we unfortunately did not have the fine-tuned BERT model available. NLI Model Incorrect ∆ Random 50.0% DA 42.6% -7.4 InferSent 41.3% -8.7 SSE 37.3% -12.7 BERT 35.9% -14.1 ESIM 32.4% -17.6 Human 16.1% -33.9 Table 3: Fraction of incorrectly ordered sentence pairs using different NLI models’ entailment predictions and crowdsourced human performance on the dataset. current NLI models are not yet robust enough for our downstream task. On the other hand, the stateof-the-art performance on common NLI datasets is already very close to human performance (Nikita and Bowman, 2019), suggesting that new datasets, such as the one presented here, are necessary to expose the models’ remaining limitations. 6 Conclusions We addressed the issue of factual errors in abstractive summaries, a severe problem that we demonstrated to be common even with state-of-the-art models. While entailment predictions should help with this issue, out-of-the-box NLI models do not perform well on the task. Our proposed task and collected data can therefore be a valuable resource for future extrinsic evaluations of NLI models. Acknowledgements This work has been supported by the German Research Foundation through the research training group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES, GRK 1994/1) and the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). 2219 References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A Large Annotated Corpus for Learning Natural Language Inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the Original: Fact Aware Neural Abstractive Summarization. In Proceedings of the ThirySecond AAAI Conference on Artificial Intelligence, pages 4784–4791, New Orleans, LA, USA. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for Natural Language Inference. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1657–1668, Vancouver, Canada. Yen-Chun Chen and Mohit Bansal. 2018. Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 675–686, Melbourne, Australia. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive Sentence Summarization with Attentive Recurrent Neural Networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93– 98, San Diego, CA, USA. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pages 177–190, Berlin, Heidelberg. Springer Berlin Heidelberg. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171––4186, Minneapolis, MN, USA. Lisa Fan, Dong Yu, and Lu Wang. 2018. Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information. In NIPS 2018 Interpretability and Robustness for Audio, Speech and Language Workshop, Montreal, Canada. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 687– 697, Melbourne, Australia. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems, pages 1693–1701, Montreal, Canada. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning Whom to Trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130, Atlanta, GA, USA. Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1430–1441, Santa Fe, NM, USA. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A Simple, Fast Diverse Decoding Algorithm for Neural Generation. ArXiv preprint 1611.08562. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive Text Summarization using Sequence-tosequence RNNs and Beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Yixin Nie and Mohit Bansal. Shortcut-Stacked Sentence Encoders for Multi-Domain Inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 41–45, Copenhagen, Denmark. Nangia Nikita and Samual R. Bowman. 2019. Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark. In Proceedings 2220 of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255, Austin, Texas. Ramakanth Pasunuru and Mohit Bansal. 2018. MultiReward Reinforced Summarization with Saliency and Entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 646–653, New Orleans, LA, USA. Ramakanth Pasunuru, Han Guo, and Mohit Bansal. 2017. Towards Improving Abstractive Summarization via Entailment Generation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 27–32, Copenhagen, Denmark. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237, New Orleans, LA, USA. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Marta Sabou, Kalina Bontcheva, Leon Derczynski, and Arno Scharl. 2014. Corpus Annotation through Crowdsourcing: Towards Best Practice Guidelines. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, Reykjavik, Iceland. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083, Vancouver, Canada. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1112–1122, New Orleans, Louisiana.
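As a supplementary illustration of the sentence-ranking evaluation reported above (Table 3), the following Python sketch computes the fraction of incorrectly ordered pairs, i.e. how often a model scores N(d, s−) ≥ N(d, s+). The triple iterator and the entailment_prob scorer are placeholders for whichever validation data and NLI model are being evaluated; they are assumptions of this sketch, not part of the original study.

from typing import Callable, Iterable, Tuple

def incorrect_pair_rate(
    triples: Iterable[Tuple[str, str, str]],
    entailment_prob: Callable[[str, str], float],
) -> float:
    # Fraction of (source, correct summary, incorrect summary) triples for
    # which the NLI model ranks the wrong sentence at least as high as the
    # correct one, i.e. N(d, s-) >= N(d, s+).
    total, wrong = 0, 0
    for source, s_correct, s_incorrect in triples:
        total += 1
        if entailment_prob(source, s_incorrect) >= entailment_prob(source, s_correct):
            wrong += 1
    return wrong / total if total else 0.0

# Hypothetical usage with any premise/hypothesis scorer:
#   rate = incorrect_pair_rate(validation_triples, nli_model.entailment_prob)
#   print(f"{100 * rate:.1f}% of sentence pairs ranked incorrectly")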
2019
213
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2221–2227 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2221 Self-Supervised Learning for Contextualized Extractive Summarization Hong Wang†, Xin Wang†, Wenhan Xiong†, Mo Yu‡, Xiaoxiao Guo‡, Shiyu Chang‡, William Yang Wang† † University of California, Santa Barbara ‡ IBM Research {hongwang600, xwang, xwhan, william}@cs.ucsb.edu, [email protected], {xiaoxiao.guo, shiyu.chang}@ibm.com Abstract Existing models for extractive summarization are usually trained from scratch with a crossentropy loss, which does not explicitly capture the global context at the document level. In this paper, we aim to improve this task by introducing three auxiliary pre-training tasks that learn to capture the document-level context in a self-supervised fashion. Experiments on the widely-used CNN/DM dataset validate the effectiveness of the proposed auxiliary tasks. Furthermore, we show that after pretraining, a clean model with simple building blocks is able to outperform previous state-ofthe-art that are carefully designed. 1 1 Introduction Extractive summarization aims at shortening the original article while retaining the key information through the way of selection sentences from the original articles. This paradigm has been proven effective by many previous systems (Carbonell and Goldstein, 1998; Mihalcea and Tarau, 2004; McDonald, 2007; Cao et al., 2015). In order to decide whether to choose a particular sentence, the system should have a global view of the document context, e.g., the subject and structure of the document. However, previous works (Nallapati et al., 2017; Al-Sabahi et al., 2018; Zhou et al., 2018; Zhang et al., 2018) usually directly build an end-to-end training system to learn to choose sentences without explicitly modeling the document context, counting on that the system can automatically learn the document-level context. We argue that it is hard for these end-to-end systems to learn to leverage the document context from scratch due to the challenges of this task, and a well pre-trained embedding model that incorporates document context should help on this 1Code can be found in this repository: https:// github.com/hongwang600/Summarization Last week, I went to attend a one-day meeting. I booked the flight in advanced. [masked sentence] The earliest next flight will be a few days later. I had to use the online discussion instead. But the flight was cancelled due to the weather. But I lost my passport. The meeting was cancelled. The weather is good today. Masked Paragraph Candidate Sentences Figure 1: An example for the Mask pre-training task. A sentence is masked in the original paragraph, and the model is required to predicted the missing sentence from the candidate sentences. task. In recent years, extensive works (Pennington et al., 2014; Nie and Bansal, 2017; Lin et al., 2017; Peters et al., 2018; Devlin et al., 2018; Subramanian et al., 2018; Cer et al., 2018; Logeswaran and Lee, 2018; Pagliardini et al., 2018) have been done in learning the word or sentence representations, but most of them only use a sentence or a few sentences when learning the representation, and the document context can hardly be included in the representation. Hence, we introduce new pre-training methods that take the whole document into consideration to learn the contextualized sentence representation with self-supervision. 
Self-supervised learning (Raina et al., 2007; Doersch et al., 2015; Agrawal et al., 2015; Wang and Gupta, 2015) is a newly emerged paradigm, which aims to learn from the intrinsic structure of the raw data. The general framework is to construct training signals directly from the structured raw data, and use it to train the model. The structure information learned through the process can then be easily transformed and benefit other tasks. Thus self-supervised learning has been widely applied in structured data like text (Okanohara and Tsujii, 2007; Collobert and Weston, 2008; Peters et al., 2018; Devlin et al., 2018; Wu et al., 2019) and images (Doersch et al., 2015; Agrawal et al., 2015; Wang and Gupta, 2015; Lee et al., 2017). 2222 Since documents are well organized and structured, it is intuitive to employ the power of selfsupervised learning to learn the intrinsic structure of the document and model the document-level context for the summarization task. In this paper, we propose three self-supervised tasks (Mask, Replace and Switch), where the model is required to learn the document-level structure and context. The knowledge learned about the document during the pre-training process will be transferred and benefit on the summarization task. Particularly, The Mask task randomly masks some sentences and predicts the missing sentence from a candidate pool; The Replace task randomly replaces some sentences with sentences from other documents and predicts if a sentence is replaced. The Switch task switches some sentences within the same document and predicts if a sentence is switched. An illustrating example is shown in Figure 1, where the model is required to take into account the document context in order to predict the missing sentence. To verify the effectiveness of the proposed methods, we conduct experiments on the CNN/DM dataset (Hermann et al., 2015; Nallapati et al., 2016) based on a hierarchical model. We demonstrate that all of the three pre-training tasks perform better and converge faster than the basic model, one of which even outperforms the state-of-the-art extractive method NEUSUM (Zhou et al., 2018). The contributions of this work include: • To the best of our knowledge, we are the first to consider using the whole document to learn contextualized sentence representations with selfsupervision and without any human annotations. • We introduce and experiment with various self-supervised approaches for extractive summarization, one of which achieves the new state-ofthe-art results with a basic hierarchical model. • Benefiting from the self-supervised pretraining, the summarization model is more sample efficient and converges much faster than those trained from scratch. 2 Model and Pre-training Methods 2.1 Basic Model As shown in Figure 2, our basic model for extractive summarization is mainly composed of two parts: a sentence encoder and a document-level self-attention module. The sentence encoder is a bidirectional LSTM (Hochreiter and SchmidhuX₁ X₂ Xₙ … LSTM LSTM LSTM … S₁ S₂ Sₙ … Self Attention Self Attention Self Attention … D₁ D₂ Dₙ … Linear Linear Linear Figure 2: The structure of the Basic Model. We use LSTM and self-attention module to encode the sentence and document respectively. Xi represent the word embedding for sentence i. Si and Di represent the independent and document involved sentence embedding for sentence i respectively. 
ber, 1997), which encodes each individual sentence Xi (a sequence of words) and whose output vector at the last step is viewed as the sentence representation Si. Given the representations of all the sentences, a self-attention module (Vaswani et al., 2017) is employed to incorporate document-level context and learn the contextualized sentence representation Di for each sentence.2 Finally, a linear layer is applied to predict whether to choose the sentence to form the summary. 2.2 Self-supervised Pre-training Methods In this section, we will describe three selfsupervised pre-training approaches. Through solving each pre-training task, the model is expected to learn the document-level contextualized sentence embedding model from the raw documents, which will then be used to solve the downstream summarization task. Note that we are only pretraining the sentence encoder and documentlevel self-attention module of the basic model for extractive summarization. Mask Similar to the task of predicting missing word, the Mask task is to predict the masked sentence from a candidate pool. Specifically, we first mask some sentences within a document with the probability Pm and put these masked sentences (xm 1 , xm 2 , · · · , xm t ) into a candidate pool T m. The 2We leave the combination of different architectures such as replacing the self-attention module with LSTM for future work. 2223 model is required to predict the correct sentence from the pool for each masked position i. We replace the sentence in the masked position i with a special token ⟨unk⟩and compute its document contextualized sentence embedding Di. We use the same sentence encoder in the basic model to obtain the sentence embedding Sm for these candidate sentences in T m. We score each candidate sentence j in T m by using the cosine similarity: Θ(i, j) = cos(Di, Sm j ) To train the model, we adopt a ranking loss to maximize the margin between the gold sentence and other sentences: ℓm = max{0, γ −Θ(i, j) + Θ(i, k)} where γ is a tuned hyper-parameter, j points to the gold sentence in T m for the masked position i, and k points to another non-target sentence in T m. Replace The Replace task is to randomly replace some sentences (with probability Pr) in the document with sentences from other documents, and then predict if a sentence is replaced. Particularly, we use sentences from 10, 000 randomly chosen documents to form a candidate pool T r. Each sentence in the document will be replaced with probability Pr by a random sentence in T r. Let Cr be the set of positions where sentences are replaced. We use a linear layer fr to predict if the sentence is replaced based on the document embedding D, and minimize the MSE loss: ℓr = MSE(fr(Di), yr i ) where yr i = 1 if i ∈Cr (i.e., the sentence in position i has been replaced), otherwise yr i = 0. Switch The Switch task is similar to the Replace task. Instead of filling these selected sentences with sentences out of the document, this task chooses to use sentences within the same document by switching these selected sentences, i.e., each selected sentence will be put in another position within the same document. Let Cs be the set of positions where the sentences are switched. Similarly, we use a linear layer fs to predict if a sentence is switched and minimize the MSE loss: ℓs = MSE(fs(Di), ys i ) where ys i = 1 if i ∈Cs, otherwise ys i = 0. Figure 3: This figure shows the Rouge-2 score for each pre-training method and the basic model on the development set during the training process. 
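To make the three pre-training objectives above concrete, the following PyTorch sketch implements the margin-based ranking loss of the Mask task and the MSE loss shared by the Replace and Switch tasks. The tensor shapes, the single sampled negative candidate, and the function names are assumptions made for illustration; this is not the authors' released implementation.

import torch
import torch.nn.functional as F

def mask_ranking_loss(d_masked, cand_embs, gold_idx, neg_idx, gamma=1.0):
    # d_masked:  (hidden,) contextualized embedding D_i at the masked position
    # cand_embs: (num_candidates, hidden) sentence embeddings S^m of the pool
    # gold_idx / neg_idx: indices of the gold and one non-target candidate
    sims = F.cosine_similarity(d_masked.unsqueeze(0), cand_embs, dim=-1)
    return torch.clamp(gamma - sims[gold_idx] + sims[neg_idx], min=0.0)

def replace_or_switch_loss(classifier, doc_embs, labels):
    # classifier: a torch.nn.Linear(hidden, 1) head playing the role of f_r or f_s
    # doc_embs:   (num_sentences, hidden) contextualized embeddings D
    # labels:     (num_sentences,) float tensor with 1.0 at corrupted positions, else 0.0
    preds = classifier(doc_embs).squeeze(-1)
    return F.mse_loss(preds, labels)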
We put the result for Rouge-1 and Rouge-L score in Appendix A.2 3 Experiment To show the effectiveness of the pre-training method (Mask, Replace and Switch), we conduct experiments on the commonly used dataset CNN/DM (Hermann et al., 2015; Nallapati et al., 2016), and compare them with a popular baseline Lead3 (See et al., 2017), which selects first three sentences as the summary, and the state-of-theart extractive summarization method NEUSUM (Zhou et al., 2018), which jointly scores and selects sentences using pointer network. 3.1 On CNN/DM Dataset Model and training details We use the rulebased system from (Zhou et al., 2018) to label sentences in a document, e.g., sentences to be extracted will be labeled as 1. Rouge score3 (Lin, 2004) is used to evaluate the performance of the model, and we report Rouge-1, Rouge-2, and Rouge-L as in prior work. We use the pretrained glove embedding (Pennington et al., 2014) with 100 dimensions to initialize the word embedding. A one-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) is used as the sentence encoder, and the size of hidden state is 200. A 5layer Transformer encoder (Vaswani et al., 2017) with 4 heads is used as the document-level selfattention module. A linear classification layer is used to predict whether to choose the sentence. The training process consists of two phrases. First, we use the pre-training task to pre-train the basic model using the raw article from the 3We use PyRouge https://pypi.org/project/ pyrouge/ to compute the Rouge score. 2224 Method Rouge-1 Rouge-2 Rouge-L Basic 41.07 18.95 37.56 LEAD3 39.93 17.62 36.21 NEUSUM 41.18∗ 18.84 37.61 Mask 41.15∗ 19.06∗ 37.65∗ Replace 41.21∗ 19.08∗ 37.73∗ Switch 41.36 19.20 37.86 SentEnc 41.17∗ 19.04∗ 37.69∗ Switch 0.15 41.35∗ 19.18∗ 37.85∗ Switch 0.35 41.27∗ 19.12∗ 37.77∗ Table 1: The Rouge (Lin, 2004) scores for the basic model, baselines, pre-training methods, and analytic experiments. All of our Rouge scores have a 95% confidence interval of at most ±0.25 as reported by the official ROUGE script. The best result is marked in bold, and those that are not significantly worse than the best are marked with ∗. CNN/DM dataset without labels. Second, we finetune the pre-trained model for the extractive summarization task using the sentence labels. The learning rate is set as 0.0001 in the pre-training phase and 0.00001 in the fine-tune phase. We train each pre-training task until it is converged or the number of training epochs reaches the upper bound 30. We set the probability to mask, replace or switch sentences as 0.25. Results We show the Rouge score on the development set during the training process in Figure 3, and present the best Rouge score for each method in Table 1. All pre-training methods improve the performance compared with the Basic model. Especially, Switch method achieves the best result on all the three evaluations compared with other pre-training methods, and is even better than the state-of-the-art extractive model NEUSUM4. In the terms of convergence, the Mask, Replace and Switch task takes 21, 24, 17 epochs in the training phase respectively, and 18, 13, 9 epochs to achieve the best performance in the fine-tune phase. The basic model takes 24 epochs to obtain the best result. From Figure 3, we can see that the Switch task converges much faster than the basic model. 
Even adding on the epochs taken in the pre-training phase, Switch method (26 epochs) 4We use code from https://github.com/ magic282/NeuSum to train the model, and evaluate it using our evaluation script. Results using their script (only include Rouge-1 and Rouge-2) is put in Appendix A.1. takes roughly the same time as the Basic model (24 epochs) to achieve the best performance. 3.2 Ablation Study Reuse only the sentence encoder Our basic model has mainly two components: a sentence encoder and a document-level self-attention module. The sentence encoder focuses on each sentence, while document-level self-attention module incorporates more document information. To investigate the role of the document-level self-attention module, we only reuse the sentence encoder of the pre-train model, and randomly initialize the document-level self-attention module. The results is shown in Table 1 as SentEnc. We can see that using the whole pre-training model (Switch 0.25) can achieve better performance, which indicates the model learn some useful document-level information from the pre-training task. We notice that only using the sentence encoder also get some improvement over the basic model, which means that the pre-training task may also help to learn the independent sentence representation. On the sensitivity of hyper-parameter In this part, we investigate the sensitivity of the model to the important hyper-parameter Pw, i.e., the probability to switch sentences. In the previous experiment, we switch sentences with probability 0.25. We further try the probability of 0.15 and 0.35, and show the results in Table 1 as Switch 0.15 and Switch 0.35. We can see Switch 0.15 achieve basically the same result as Switch 0.25, and Switch 0.35 is slightly worse. So the model is not so sensitive to the hyper-parameter of the probability to switch sentences, and probability between 0.15 and 0.25 should be able to work well. 4 Conclusion In this paper, we propose three self-supervised tasks to force the model to learn about the document context, which will benefit the summarization task. Experiments on the CNN/DM verify that through the way of pre-training on our proposed tasks, the model can perform better and converge faster when learning on the summarization task. Especially, through the Switch pre-training task, the model even outperforms the state-of-theart method NEUSUM (Zhou et al., 2018). Further analytic experiments show that the document context learned by the document-level self-attention module will benefit the model in summarization 2225 task, and the model is not so sensitive to the hyperparameter of the probability to switch sentences. References Pulkit Agrawal, Jo˜ao Carreira, and Jitendra Malik. 2015. Learning to see by moving. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 37–45. IEEE Computer Society. Kamal Al-Sabahi, Zuping Zhang, and Mohammed Nadher. 2018. A hierarchical structured selfattentive model for extractive document summarization (HSSAS). IEEE Access, 6:24205–24212. Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015. Ranking with recursive neural networks and its application to multi-document summarization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 2530, 2015, Austin, Texas, USA., pages 2153–2159. AAAI Press. Jaime G. Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. 
In SIGIR ’98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, pages 335–336. ACM. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 169–174. Association for Computational Linguistics. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 160– 167. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Carl Doersch, Abhinav Gupta, and Alexei A. Efros. 2015. Unsupervised visual representation learning by context prediction. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 1422–1430. IEEE Computer Society. Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693–1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. 2017. Unsupervised representation learning by sorting sequences. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 667–676. IEEE Computer Society. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Zhouhan Lin, Minwei Feng, C´ıcero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In International Conference on Learning Representations. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In International Conference on Learning Representations. Ryan T. McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Advances in Information Retrieval, 29th European Conference on IR Research, ECIR 2007, Rome, Italy, April 2-5, 2007, Proceedings, volume 4425 of Lecture Notes in Computer Science, pages 557–564. Springer. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing , EMNLP 2004, A meeting of SIGDAT, a Special Interest Group of the ACL, held in conjunction with ACL 2004, 25-26 July 2004, Barcelona, Spain, pages 404–411. ACL. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. 
In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3075– 3081. AAAI Press. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, C¸ aglar G¨ulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. ACL. 2226 Yixin Nie and Mohit Bansal. 2017. Shortcutstacked sentence encoders for multi-domain inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 41–45. Association for Computational Linguistics. Daisuke Okanohara and Jun’ichi Tsujii. 2007. A discriminative language model with pseudo-negative samples. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. The Association for Computational Linguistics. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 528– 540. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Association for Computational Linguistics. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learning: transfer learning from unlabeled data. In Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, volume 227 of ACM International Conference Proceeding Series, pages 759–766. ACM. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073– 1083. Association for Computational Linguistics. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J. Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. CoRR, abs/1804.00079. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Xiaolong Wang and Abhinav Gupta. 2015. Unsupervised learning of visual representations using videos. 
In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2794–2802. IEEE Computer Society. Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Self-supervised dialogue learning. In ACL 2019, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. The Association for Computational Linguistics. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 779–784. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 1520, 2018, Volume 1: Long Papers, pages 654–663. Association for Computational Linguistics. 2227 A Appendix (a) Rouge-1 (b) Rouge-L Figure 4: The Rouge-1 and Rouge-L score for each pre-training method and the basic model on the development set during the training process. A.1 Evaluation results using scripts from NEUSUM Method Rouge-1 Rouge-2 Basic 41.13 18.97 Mask 41.21∗ 19.07∗ Replace 41.27∗ 19.09∗ Switch 41.41 19.22 LEAD3 39.98 17.63 NEUSUM− 41.23∗ 18.85 Table 2: The Rouge (Lin, 2004) score for basic model, the pre-training methods, and the baselines. We use the script from https://github.com/magic282/ NeuSum to compute the Rouge score. All of our Rouge scores have a 95% confidence interval of at most ±0.22 as reported by the official ROUGE script. The best result for each score is marked in bold, and those that are not significantly worse than the best are marked with ∗. A.2 Rouge-1 and Rouge-L results The Rouge-1 and Rouge-L results are shown in Figure 4, from which we can see that the Switch method achieves the best performance.
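As an aside for readers who wish to reproduce the Switch corruption used in Section 2.2, the sketch below builds one training example from a list of sentences. The switch probability of 0.25 matches the paper; the retry-until-no-fixed-point permutation and the handling of documents with fewer than two selected sentences are assumptions of this sketch rather than details taken from the paper.

import random

def make_switch_example(sentences, p_switch=0.25, rng=random):
    # Select positions to switch, permute the selected sentences among those
    # positions, and label every selected position with 1 (switched), else 0.
    idx = [i for i in range(len(sentences)) if rng.random() < p_switch]
    if len(idx) < 2:                       # nothing can be switched
        return list(sentences), [0] * len(sentences)
    shuffled = idx[:]
    while any(a == b for a, b in zip(idx, shuffled)):
        rng.shuffle(shuffled)              # retry until no position maps to itself
    corrupted = list(sentences)
    for src, dst in zip(idx, shuffled):
        corrupted[dst] = sentences[src]
    switched = set(idx)
    labels = [1 if i in switched else 0 for i in range(len(sentences))]
    return corrupted, labels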
2019
214
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2228–2234 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2228 On the Summarization of Consumer Health Questions Asma Ben Abacha Dina Demner-Fushman [email protected] [email protected] LHNCBC, U.S. National Library of Medicine, Bethesda, MD Abstract Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. 1 Introduction Teaching machines how to automatically understand natural language questions to retrieve relevant answers is still a challenging task. Different factors increase the complexity of the task such as the question length (cf. Figure 1), the lexical heterogeneity when describing the same information need, and the lack of domain-specific training datasets. Improving Question Answering (QA) has been the focus of multiple research efforts in recent years. Several efforts proposed interactive and non-interactive query relaxation techniques to translate the input questions into structured queries covering specific elements of the questions (Yahya et al., 2013; Mottin et al., 2014; Ben Abacha and Zweigenbaum, 2015; Meng et al., 2017). Other efforts focused on (i) identifying question similarity (Nakov et al., 2016, 2017) and Figure 1: Consumer health questions and associated summaries from the gold standard. The entities in Red are the foci (main entities). The words in Blue and underlined are the triggers of the question types. question entailment (Ben Abacha and DemnerFushman, 2019b) in order to retrieve similar or entailed questions that have associated answers, or (ii) paraphrasing the questions and submitting the simplified versions to QA systems (Bordes et al., 2014; Dong et al., 2017). Question simplification or summarization was less studied than the summarization of news articles that has been the focus of neural abstractive methods in recent years (Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; See et al., 2017). In this paper, we tackle the task of consumer health question summarization. Consumer health questions are a natural candidate for this task as patients and their families tend to provide numerous peripheral details such as the patient history (Roberts and Demner-Fushman, 2016), that are not always needed to find correct answers. Recent experiments also showed the 2229 key role of question summarization in improving the performance of QA systems (Ben Abacha and Demner-Fushman, 2019a). 
We present three main contributions: (i) we define Question Summarization as generating a condensed question expressing the minimum information required to find correct answers to the original question, and we create a new corpus1 of 1K consumer health questions and their summaries based on this definition (cf. Figure 1); (ii) we explore data augmentation techniques, including semantic selection from open-domain datasets, and study the behavior of state-of-the-art neural abstractive models on the original and augmented datasets; (iii) we present a detailed error analysis and discuss potential areas of improvements for consumer health question summarization. We present related work in the following section. The abstractive models and data creation and augmentation methods are presented in section 3. We present the evaluation in section 4 and discuss the results and error analysis in section 5. 2 Related Work With the recent developments in neural machine translation and generative models (Bahdanau et al., 2014), text summarization has been focusing on abstractive models for sentence or headline generation and article summarization (Rush et al., 2015; Nallapati et al., 2016; Gehrmann et al., 2018). In particular, Rush et al. (2015) proposed an approach for the abstractive summarization of sentences combining a neural language model with a contextual encoder (Bahdanau et al., 2014). For text summarization, Nallapati et al. (2016) proposed a recurrent and attentional encoder-decoder network that takes into account out-of-vocabulary words with a pointer mechanism. This copy mechanism can combine the advantages of both extractive and abstractive summarization (Gu et al., 2016). See et al. (2017) used a hybrid pointer-generator network combining a sequence-to-sequence (seq2seq) attentional model with a similar pointer network (Vinyals and Le, 2015) and a coverage mechanism (Tu et al., 2016). They achieved the best performance of 39.53% ROUGE-1 on the CNN/DailyMail dataset of 312k news articles. Abstractive summarization models have mainly been trained and evaluated on news articles due to the availability of large scale news 1github.com/abachaa/MeQSum datasets. Fewer efforts tackled other subtasks with different inputs, such as summarization of opinions, conversations or emails (Dubou´e, 2012; Li et al., 2016; Angelidis and Lapata, 2018). In this paper we focus on the summarization of consumer health questions. To the best of our knowledge, only Ishigaki et al. (2017) studied the summarization of lengthy questions in the open domain. They created a dataset from a community question answering website by using the question-title pairs as question-summary pairs, and compared extractive and abstractive summarization models. Their results showed that an abstractive model based on an encoder-decoder and a copying mechanism achieves the best performance of 42.2% ROUGE-2. 3 Methods We define the question summarization task as generating a condensed question expressing the minimum information required to find correct answers to the original question. 3.1 Summarization Models We study two encoder-decoder-attention architectures that achieved state-of-the-art results on open domain summarization datasets. Sequence-to-sequence attentional model. This model is adopted from Nallapati et al. (2016). The encoder consists of a bidirectional LSTM layer fed with input word embeddings trained from scratch for the summarization task. The decoder also consists of a bidirectional LSTM layer. 
An attentional distribution (Bahdanau et al., 2014) is computed from the encoder’s LSTM to build a context vector that is combined with the decoder embeddings to predict the word that is most likely to come next in the sequence. Pointer-generator network. This model is adopted from See et al. (2017). It extends the sequence-to-sequence attentional model with pointer network (Vinyals and Le, 2015) that has a flexible copying mechanism allowing to either generate the next word or point to a location in the source text. The decision on whether to generate the new word or to point back to a source location is made by using a probability function as a soft switch. This probability is computed from dense connections to the decoder’s input and hidden state and the context vector. This design is particularly suited to deal with words outside of 2230 Method Type Examples #1 MeQSum Consumer Health Question I suffered a massive stroke on [DATE] with paralysis on my left side of my body, I’m home and conduct searches on the internet to find help with recovery, and always this product called neuroaid appears claiming to restore function. to my knowledge it isn’t approved by the FDA, but it sounds so promising. do you know anything about it and id there anything approved by our FDA, that does help? Summary What are treatments for stroke paralysis, including neuroaid? #2 Augmentation with Clinical Data Clinical Question 55-year-old woman. This lady has epigastric pain and gallbladder symptoms. How do you assess her gallbladder function when you don’t see stones on the ultrasound? Can a nonfunctioning gallbladder cause symptoms or do you only get symptoms if you have stones? Summary Can a nonfunctioning gallbladder cause symptoms or do you only get symptoms if you have stones? #3 Augmentation with Semantic Selection Medical Question Is it healthy to ingest 500 mg of vitamin c a day? Should I be taking more or less? Summary How much vitamin C should I take a day? Table 1: Examples of question-summary pairs from the created datasets. the target vocabulary in production or test environments. We also test the coverage variant of this model which includes an additional loss term taking into account the diversity of the words that were targeted by the attention layer for a given text Tu et al. (2016). This variant is intended to deal with repetitive word generation issue in sequence to sequence models. 3.2 Data Creation We manually constructed a gold standard corpus, MeQSum, of 1,000 consumer health questions and their associated summaries. We selected the questions from a collection distributed by the U.S. National Library of Medicine (Kilicoglu et al., 2018). Three medical experts performed the manual summarization of the 1K questions using the following guidelines: (i) the summary must allow retrieving correct and complete answers to the original question and (ii) the summary cannot be shortened further without failing to comply with the first condition. All the summaries were then double validated by a medical doctor who also gave the following scores: 1 (perfect summary), 0.5 (acceptable), and 0 (incorrect, and replaced the summary in this case). Based on these scores, the interannotator agreement (IAA) was 96.9%. In method #1, we used 500 pairs for training and 500 pairs for the evaluation of the summarization models. We augmented the training set incrementally with two different methods. 
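Before turning to the two augmentation methods, the following PyTorch sketch illustrates the copy/generate soft switch that distinguishes the pointer-generator network of Section 3.1 from the plain attentional model. The layer sizes, variable names, and the omission of the extended vocabulary for source out-of-vocabulary words (here assumed to be mapped to UNK ids) are simplifications for illustration; this is not See et al.'s reference implementation.

import torch
import torch.nn as nn

class GenerationSwitch(nn.Module):
    # Mixes the decoder's vocabulary distribution with the attention (copy)
    # distribution through a learned probability p_gen.
    def __init__(self, hidden_size, emb_size):
        super().__init__()
        self.p_gen_layer = nn.Linear(2 * hidden_size + emb_size, 1)

    def forward(self, context, dec_state, dec_input, vocab_dist, attn_dist, src_ids):
        # context (B, H), dec_state (B, H), dec_input (B, E)
        # vocab_dist (B, V): softmax over the target vocabulary
        # attn_dist (B, L): attention over source positions
        # src_ids (B, L): LongTensor of the source tokens' vocabulary ids
        switch_in = torch.cat([context, dec_state, dec_input], dim=-1)
        p_gen = torch.sigmoid(self.p_gen_layer(switch_in))            # (B, 1)
        final_dist = p_gen * vocab_dist
        # route the copy mass back onto the vocabulary ids of the source tokens
        final_dist = final_dist.scatter_add(1, src_ids, (1.0 - p_gen) * attn_dist)
        return final_dist, p_gen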
In the first augmentation method (#2) we added a set of 4,655 pairs of clinical questions asked by family doctors and their short versions (Ely et al., 2000). The second (augmented) training set has a total of 5,155 question-summary pairs. Our third method (#3) relies on the semantic selection of relevant question pairs from the Quora open-domain dataset (Shankar Iyer and Csernai, 2017). The source Quora dataset consists of 149,262 pairs of duplicate questions. We selected a first set of candidate pairs where a question A had at least 2 sentences and its duplicate question B had only one sentence. Sentence segmentation was performed using the Stanford parser. This first selection led to a subset of 11,949 pairs. From this subset, we targeted three main medical categories: Diseases, Treatments, and Tests. We extracted the question pairs that have at least one medical entity from these categories. We used MetaMapLite (Demner-Fushman et al., 2017) to extract these entities by targeting a list of 35 UMLS (Lindberg et al., 1993) semantic types2. The final Quora subset constructed by this method contains 2,859 medical pairs. The third (augmented) training set includes the data from the three methods (8,014 training pairs). Table 1 presents example questionsummary pairs from each dataset. 4 Experiments and Results In the pointer generator and the seq2seq models, we use hidden state vectors of 256 dimensions and word embedding vectors of 128 dimensions trained from scratch. We set the size of the source and target vocabularies to 50K and the minimum length of the question summaries to 4 tokens. When applied, the coverage mechanism was started from the first iteration. We use the Adagrad 2acab, anab ,comd, cgab, dsyn, inpo, mobd, neop, patf, sosy, bact, virs, lbpr, diap, lbtr, irda, nsba, vita, strd, phsu, antb, clnd, horm, carb, lipd, topp, aapp, nnon, elii, hops, orch, imft, bacs, inch, opco 2231 optimizer with a learning rate of 0.15 to train the network. At decode-time, we used beam search of size 4 to generate the question summary. Method Training Set R-1 R-2 R-L Seq2seq #1 24.80 13.84 24.27 Attentional #2 28.97 18.34 28.74 Model #3 27.62 15.70 27.11 Pointer #1 35.80 20.19 34.79 Generator #2 42.77 25.00 40.97 (PG) #3 44.16 27.64 42.78 PG+Coverage #1 39.57 23.05 38.45 #2 40.00 24.13 38.56 #3 41.76 24.80 40.50 Table 2: Results of the question summarization models on the gold standard dataset. Results are reported using the ROUGE-1, ROUGE-2, and ROUGE-L measures and presented in Table 2. The pointer generator achieves a ROUGE-1 score of 44.16% when trained on the full training dataset of 8k pairs (Method #3). The coverage mechanism improved the results of the first training set, with a limited number of training pairs (500), but decreased performance on the other training sets. This is maybe explained by the fact that the systems did not generate frequent repetitions when using the second and third training sets, which suggests that the data augmentation methods provided enough coverage and better training for the generation of relevant summaries from the test data. Figure 2 presents an example of a generated summary. 5 Discussion The best performance of 44.16% is comparable to the state-of-the-art results in open-domain text summarization. Interestingly this performance was achieved using a relatively small set of 8K training pairs (2.5% of the size of the CNNDailyMail dataset). 
Although this observation can be partially explained by the shorter average length of question summaries when compared to news summaries, a ROUGE-1 score of 44.16% suggests that the trained model reached a relatively efficient local optimum with a useful level of abstraction for consumer health question summarization. This result is especially promising, considering (i) the low-frequency nature of most medical entities and (ii) the fact that the model did not rely on external sources of medical knowledge. Figure 2: A summary generated by PG+M#2 method. ROUGE (Lin and Hovy, 2003) is based on n-gram co-occurrences and despite its wide use in summary evaluation, it has some limitations. Metrics specific to question answering, such as POURPRE for the evaluation of answers to definition questions (Lin and Demner-Fushman, 2005), share some of the same limitations and do not capture fluency or semantic correctness of the summary. To study the correlation between ROUGE and human judgment in question summarization, we manually evaluated a subset of 10% of the generated summaries. We randomly selected 50 summaries produced by each PG method (M#1, M#2, and M#3) from the test set. To judge the correctness of the generated summaries, we used three scores: 0 (incorrect summary), 1 (acceptable summary), and 2 (perfect). Table 4 presents the results of the manual evaluation of the summaries. Table 3 presents examples of the generated summaries by each evaluated method. A fair amount of the manually evaluated summaries were extractive, but many were correctly generated, as can be seen in the examples. We manually evaluated the three PG methods that achieved the best performance. These methods do not include coverage which aimed to deal with repetitive word generation issue. From our observations, few generated summaries had the repetition issue (e.g. “where can i find information on genetic genetic genetic genetic genetic ...”). All repetitions were generated by the M#1 method having the smallest training set (500 pairs), which means that having more training instances (5K for M#2 and 8K for M#3) alleviated the repetition problem in question summarization. For a more in-depth analysis, we studied the 2232 Question #1 Kidney failure 3rd stage What foods do I eat? and if I drink lots of water will that help? Is there a book that I can get to understand this disease? Reference where can i find information on stage three kidney failure and what are the nutritional guidelines for it? M1 what are the treatments for failure? M2 kidney failure 3rd stage what foods do i eat? M3 what are the treatments for kidney failure? Question #2 pseudogout @ http://www.nlm.nih.gov/medlineplus/ency/article/000421.htm I see the statement ”There is no known way to prevent this disorder. However, treating other problems that may cause pseudogout may make the condition less severe” which I would like to have explained, especially what those other problems are &how they may be treated. I’m especially interested in whether supplemental calcium may not be good to take. Reference, M1 & M2 what are the treatments for pseudogout? M3 what are the treatments for pseudogout http://www.nlm.nih.gov/medlineplus/ency/article/000421.htm? Table 3: Examples of summaries generated by the three PG methods vs. manually created reference summaries. Score PG+M#1 PG+M#2 PG+M#3 Manual 13% 46% 37% ROUGE-1 35.80% 42.77% 44.16% Table 4: Manual Evaluation of the PG methods’ summaries on 10% of the test set. 
The manual score is the normalized average score over all summaries. manually generated summaries of the PG+M#3 method on a random 10% subset of the test data. We identified 4 main types of errors that should be tackled in future efforts: T1 (Question Focus3): The question focus is missing or not correctly identified (e.g. “What are the treatments?”). T2 (Question Type): The question type is not the same (e.g. “what are the treatments for williams syndrome?” instead of “where can I get genetic testing for william’s syndrome?”). T3 (Semantic inconsistency): The question type does not apply to the focus category: e.g., “what are the treatments for nulytely?”, where nulytely is a drug name). T4 (Summarization): The summary is either not minimal, or not complete: e.g., the original question contains several sub-questions, but the summary contains only one of them. The examples above are from the results of the method PG+M#3. Table 5 presents the distribution of error types, taking into account multiple error types per summary when they occur. 76% of the errors are related to the question focus and the question type. Interestingly, only 7% of the summaries are semantically inconsistent. These findings suggest that training the networks to take into account the question focus and type is a promising direction for improvement. Such approach could be achieved either through multitask training or 3Main entity in the question. through additional input features, and will be investigated further in our future work. Method T1 T2 T3 T4 PG+M#3 38% 31% 7% 24% Table 5: Distribution of error types. 6 Conclusion We studied consumer health question summarization and introduced the MeQSum corpus of 1K consumer health questions and their summaries, which we make available in the scope of this paper4. We also explored data augmentation methods and studied the behavior of abstractive models on this task. In future work, we intend to examine multitask approaches combining question summarization and question understanding. Acknowledgments This work was supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health. We would like to thank Sonya E. Shooshan and Mark Sharp for their help with the manual summarization. References Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. CoRR, abs/1808.08858. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. 4github.com/abachaa/MeQSum 2233 Asma Ben Abacha and Dina Demner-Fushman. 2019a. On the role of question summarization and information source restriction in consumer health question answering. In Proceedings of the AMIA 2019 Informatics Summit, San Francisco, CA, USA, 2019. Asma Ben Abacha and Dina Demner-Fushman. 2019b. A question-entailment approach to question answering. CoRR, abs/1901.08079. Asma Ben Abacha and Pierre Zweigenbaum. 2015. MEANS: A medical question-answering system combining NLP techniques and semantic web technologies. Inf. Process. Manage., 51(5):570–594. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 615–620. Sumit Chopra, Michael Auli, and Alexander M. 
Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, June 12-17, 2016, pages 93–98. Dina Demner-Fushman, Willie J. Rogers, and Alan R. Aronson. 2017. Metamap lite: an evaluation of a new java implementation of metamap. JAMIA, 24(4):841–844. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 875–886. Pablo Ariel Dubou´e. 2012. Extractive email thread summarization: Can we do better than he said she said? In INLG 2012 - Proceedings of the Seventh International Natural Language Generation Conference, 30 May 2012 - 1 June 2012, Starved Rock State Park, Utica, IL, USA, pages 85–89. John W. Ely, Jerome A. Osheroff, Paul N. Gorman, Mark H. Ebell, M. Lee Chambliss, Eric A. Pifer, and P. Zoe Stavri. 2000. A taxonomy of generic clinical questions: classification study. British Medical Journal, 321:429–432. Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4098–4109. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Tatsuya Ishigaki, Hiroya Takamura, and Manabu Okumura. 2017. Summarizing lengthy questions. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 792–800. Halil Kilicoglu, Asma Ben Abacha, Yassine Mrabet, Sonya E. Shooshan, Laritza Rodriguez, Kate Masterton, and Dina Demner-Fushman. 2018. Semantic annotation of consumer health questions. BMC Bioinformatics, 19(1):34:1–34:28. Qiudan Li, Zhipeng Jin, Can Wang, and Daniel Dajun Zeng. 2016. Mining opinion summarizations using convolutional neural networks in chinese microblogging systems. Knowl.-Based Syst., 107:289–300. Chin-Yew Lin and Eduard H. Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLTNAACL 2003, Edmonton, Canada, May 27 - June 1, 2003. Jimmy J. Lin and Dina Demner-Fushman. 2005. Automatically evaluating answers to definition questions. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada, pages 931–938. Donald A. Lindberg, Betsy L. Humphreys, and Alexa T. McCray. 1993. The unified medical language system. Methods of Information in Medicine, 32:281–291. Xiangfu Meng, Xiaoyan Zhang, Yanhuan Tang, and Chongchun Bi. 2017. Adaptive query relaxation and top-k result ranking over autonomous web databases. Knowl. Inf. Syst., 51(2):395–433. Davide Mottin, Alice Marascu, Senjuti Basu Roy, Gautam Das, Themis Palpanas, and Yannis Velegrakis. 2014. 
IQR: an interactive query relaxation system for the empty-answer problem. In International Conference on Management of Data, SIGMOD 2014, Snowbird, UT, USA, June 22-27, 2014, pages 1095–1098. Preslav Nakov, Doris Hoogeveen, Llu´ıs M`arquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. SemEval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval ’17, Vancouver, Canada. Association for Computational Linguistics. 2234 Preslav Nakov, Llu´ıs M`arquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. Semeval-2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACLHLT, San Diego, CA, USA, June 16-17, 2016, pages 525–545. Ramesh Nallapati, Bing Xiang, and Bowen Zhou. 2016. Sequence-to-sequence rnns for text summarization. In International Conference on Learning Representations, Workshop track. Kirk Roberts and Dina Demner-Fushman. 2016. Interactive use of online health resources: a comparison of consumer and professional questions. JAMIA, 23(4):802–811. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379–389. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073– 1083. Nikhil Dandekar Shankar Iyer and Kornl Csernai. 2017. First quora dataset release: Question pairs. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869. Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, and Gerhard Weikum. 2013. Robust question answering over the web of linked data. In 22nd ACM International Conference on Information and Knowledge Management, CIKM’13, San Francisco, CA, USA, October 27 - November 1, 2013, pages 1107–1116.
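To clarify the semantic selection of Quora pairs described in Section 3.2 (method #3), the following Python sketch filters duplicate-question pairs by sentence counts and by the presence of targeted UMLS semantic types. The sent_split and extract_semtypes callables stand in for a sentence segmenter and an entity extractor such as MetaMapLite, and the small set of semantic-type codes shown is only an illustrative subset of the 35 types listed in the paper; whether the entity must appear in the long question, the short question, or either is likewise an assumption of this sketch.

from typing import Callable, Iterable, List, Set, Tuple

TARGET_SEMTYPES = {"dsyn", "neop", "sosy", "diap", "phsu", "topp"}  # illustrative subset

def select_medical_pairs(
    pairs: Iterable[Tuple[str, str]],
    sent_split: Callable[[str], List[str]],
    extract_semtypes: Callable[[str], Set[str]],
) -> List[Tuple[str, str]]:
    # Keep (question A, duplicate question B) pairs where A has at least two
    # sentences, B is a single sentence, and the pair mentions at least one
    # entity of a targeted semantic type (diseases, treatments, tests, ...).
    selected = []
    for q_long, q_short in pairs:
        if len(sent_split(q_long)) < 2 or len(sent_split(q_short)) != 1:
            continue
        if extract_semtypes(q_long + " " + q_short) & TARGET_SEMTYPES:
            selected.append((q_long, q_short))
    return selected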
2019
215
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2235–2240 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2235 Unsupervised Rewriter for Multi-Sentence Compression Yang Zhao1, Xiaoyu Shen2, Wei Bi3, Akiko Aizawa4,1 1The University of Tokyo, Tokyo, Japan [email protected] 2Max Planck Institute for Informatics, Saarland, Germany [email protected] 3Tencent AI Lab, Shenzhen, China [email protected] 4National Institute of Informatics, Tokyo, Japan [email protected] Abstract Multi-sentence compression (MSC) aims to generate a grammatical but reduced compression from multiple input sentences while retaining their key information. Previous dominating approach for MSC is the extractionbased word graph approach. A few variants further leveraged lexical substitution to yield more abstractive compression. However, two limitations exist. First, the word graph approach that simply concatenates fragments from multiple sentences may yield nonfluent or ungrammatical compression. Second, lexical substitution is often inappropriate without the consideration of context information. To tackle the above-mentioned issues, we present a neural rewriter for multisentence compression that does not need any parallel corpus. Empirical studies have shown that our approach achieves comparable results upon automatic evaluation and improves the grammaticality of compression based on human evaluation. A parallel corpus with more than 140,000 (sentence group, compression) pairs is also constructed as a by-product for future research. 1 Introduction Multi-sentence compression (MSC) aims to generate a single shorter and grammatical sentence that preserves important information from a group of related sentences. Over the past decade, multisentence compression has attracted considerable attention owing to its potential applications, such as compressing the content to be displayed on screens with limited size (e.g., mobile devices) and benefiting other natural language processing tasks, such as multi-document summarization (Banerjee et al., 2015), opinion summarization, and text simplification. Most existing works rely on the word graph approach initialized in (Filippova, 2010), which offers a simple solution that copies fragments from different input sentences and concatenates them to form the final compression. Later on, a bunch of subsequent research works (Boudin and Morin, 2013; Banerjee et al., 2015; Luong et al., 2015; ShafieiBavani et al., 2016; Pontes et al., 2018; Nayeem et al., 2018) attempted to improve the word graph approach using a variety of strategies, such as keyphrase re-ranking. However, such extraction-based approach may yield nonfluent or ungrammatical compression. A previous study (Nayeem and Chali, 2017) has shown that word graph approaches produce more than 30% of the ungrammatical sentences (evaluated by a chart parser), which is partly due to the non-usage of rewording by these extraction-based approaches. In fact, human annotators tend to compress a sentence through several rewriting operations, such as substitution and rewording (Cohn and Lapata, 2008). Despite some research works that attempt to do the lexical substitution, it is often inappropriate without the consideration of context information. To tackle the above-mentioned problems, we present herein an unsupervised rewriter to improve the grammaticality of compression while introducing an appropriate amount of novel words. 
Inspired by the unsupervised machine translation (Sennrich et al., 2015; Fevry and Phang, 2018), we adopted the back-translation technique to our setting. Unlike machine translation, in the case of compression task, multiple input sentences and single output compression usually do not have semantic equivalence, which complicates the application of the back-translation technique. Thus, we propose a rewriting scheme that first exploits word graph approach to produce coarse-grained compression (B), based on which we substitute words with their shorter synonyms to yield paraphrased sentence (C). A neural rewriter is subsequently applied to the semantically equivalent (B, C) pairs 2236 in order to improve the grammaticality and encourage more novel words in compression. Our contributions are two-folds:(i) we present a neural rewriter for multi-sentence compression without any parallel data. This rewriter significantly improves the grammaticality and novel word rate, while maintaining the information coverage (informativeness) according to automatic evaluation and (ii) a large-scale multi-sentence compression corpus is introduced along with a manually created test set for future research. We release source code and data here1. 2 Dataset Construction The largest existing English corpus for multisentence compression is the Cornell corpus (McKeown et al., 2010), which has only 300 instances. We introduce herein a large-scale dataset by compiling the English Gigaword2. After preprocessing (e.g., filtering strange punctuations, etc.), 1.37 million news articles were yielded to group related sentences. The full procedure for the dataset construction is available here3. 2.1 Group Related Sentences The prerequisite for multi-sentence compression is that all input sentences should be related to the same topic or event. Inspired by (McKeown et al., 2010), if the sentences are too similar, one of the input sentences could be directly treated as a compression. In contrast, if the sentences are too dissimilar (no interaction), they may describe different events or topics. Both cases should be avoided because sentence compression would not be necessary. Here we use bi-gram similarity, which exhibited the highest accuracy (90%)4. We empirically arrived at 0.2 of the lower threshold of the bigram similarity to avoid very dissimilar sentences and 0.7 of the upper threshold of the bigram similarity to avoid near-identical sentences. As presented in Table 1, 140,572 sentence groups were finally yielded out of 1.37 million new articles. We refer to this as the Giga-MSC dataset. 1http://github.com/code4ai 2https://catalog.ldc.upenn.edu/LDC2011T07 English Gigaword, a comprehensive archive of newswire text data containing seven distinct international sources. 3http://github.com/code4ai/data 4Human judges were asked to evaluate whether the sentences in a group revolved around the same topic or event. A total of 45 out of 50 sentence groups were judged to be qualified. # of sentences in a group # of groups 2 133,123 3 6,633 4 816 In total 140,572 Table 1: Statistics of created Giga-MSC dataset. 2.2 Giga-MSC Dataset Annotation We randomly selected 150 sentences for human annotation, which were used as reference compression in the automatic evaluation. Two annotators5 were asked to generate one single reduced grammatical compression that satisfies two conditions:(1) conveys the important content of all the input sentences and (2) should be grammatically correct. 
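Returning briefly to the sentence-grouping filter of Section 2.1: the paper thresholds a bigram similarity at 0.2 (too dissimilar) and 0.7 (near-identical) but does not spell out the similarity formula. The sketch below is therefore an assumption, using a Dice-style bigram overlap; all function names are illustrative, not the authors' implementation.

from itertools import combinations

def bigrams(sentence):
    """Return the set of adjacent word bigrams of a whitespace-tokenized sentence."""
    tokens = sentence.lower().split()
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def bigram_similarity(s1, s2):
    """Dice-style bigram overlap between two sentences (assumed definition)."""
    b1, b2 = bigrams(s1), bigrams(s2)
    if not b1 or not b2:
        return 0.0
    return 2 * len(b1 & b2) / (len(b1) + len(b2))

def is_valid_group(sentences, lower=0.2, upper=0.7):
    """Keep a sentence group only if every pair is neither too dissimilar
    (likely different events) nor near-identical (compression unnecessary)."""
    for s1, s2 in combinations(sentences, 2):
        sim = bigram_similarity(s1, s2)
        if sim < lower or sim > upper:
            return False
    return True

group = ["The oil spill must be stopped immediately, officials said.",
         "Officials said the oil spill must be contained before it spreads."]
print(is_valid_group(group))  # True or False depending on the pair's overlap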
We are interested in how the human annotators will perform this task without vocabulary constraints; hence, we did not tell them to introduce as little new vocabulary as possible in their compression as several previous works did (Boudin and Morin, 2013; Luong et al., 2015). Inter-agreement score Fleiss’ Kappa (Artstein and Poesio, 2008) was also computed. The score was 0.43, demonstrating that moderate agreement was reached. 3 Methodology Figure 1 illustrates our rewriting approach consisting of three steps. 3.1 Step.1 (A→B) Given m input sentences, s1, s2, ..., sm, called A, we use the keyphrase word graph approach (Boudin and Morin, 2013) to obtain coarsegrained compression, called B. 3.2 Step.1 (B→C) C is yielded by substituting words and phrases in B with synonyms. We first identified all the multiword expressions in a sentence and determined all the synonyms in WordNet 3.06. Keep in mind that our goal is to shorten the sentence as much as possible, we specifically substituted multiword expressions, such as police officer, united states of america, with their shorter synonyms policeman and u.s.. Because the size of synonyms in the WordNet dictionary is relatively limited, we also exploit PPDB 2.07 to replace 5Both annotators are native English speakers and not authors of this paper. 6https://wordnet.princeton.edu/ 7https://paraphrase.prg 2237 A: m input sentences B: coarse-grained compression Synonyms substitution Step 1 C: paraphrased compression s1, s2, … sm Step 2 Step 3 B + B' C + C' train forward model with 1M + 140K pairs C: paraphrased compression B: coarse-grained compression train backward model with 140k pairs Word graph approach feed 1 million C’ to pre-trained backward model and yield 1 million B’ as pseudo parallel data Figure 1: Graphic illustration for the rewriter model. A refers to multiple input sentences. B denotes a single compressed sentence using the word graph approach. C is the paraphrased sentence. C′ is a large-scale and indomain monolingual corpus, while B′ refers to the predicted compression by a pre-trained backward model given C′ as input. B + B′ and C + C′ are the mixing datasets. the nouns, verbs, and adjectives with their shorter counterparts. For example, the verb demonstrating is converted into proved. By using the Giga-MSC dataset we created, 140,000 (A, B, C) tuples are yielded. Lexical substitution might lead to nonfluency C but significantly increases the number of novel words. Therefore, the next steps focus on creating pseudo parallel data to boost the fluency of C while attempting to maintain the rate of novel words. 3.3 Step2 (C→B) Because the yielded B and C are semantically equivalent, we train a backward model (C→B) using 140,000 (C, B) pairs. The backward model consisted of a three-layer bi-directional LSTM encoder and a uni-directional decoder with attention mechanism. After the backward model was trained, one million grammatical in-domain sentences C′ were given as input to generate one million B′ The average length of C′ was similar to that of C (30.2 tokens). We also found that C′ maintained a novel rate of approximately 8.9, as compared to B′. 3.4 Step.3 (B+B’→C+C’) We merge the training data (coarse-grained compression B and non-fluent paraphrasing compression C) and the pseudo parallel data (pseudo sentence B′ and grammatical sentence C′) to jointly learn a forward model that consisted of a threelayer LSTM encoder and decoder. The vocabulary and word embedding were shared between both backward and forward models. 
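A minimal sketch of the substitution idea in Step 1 (B→C) above, using NLTK's WordNet interface: each word is replaced by a strictly shorter synonym when one exists. Multiword-expression detection and the PPDB 2.0 lookup described above are omitted, and the function names are illustrative assumptions rather than the authors' code.

# requires: pip install nltk; then nltk.download('wordnet') once
from nltk.corpus import wordnet as wn

def shortest_synonym(word, pos=None):
    """Return a synonym shorter than `word`, or the word itself if none exists."""
    candidates = set()
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                candidates.add(name)
    shorter = [c for c in candidates if len(c) < len(word)]
    return min(shorter, key=len) if shorter else word

def paraphrase(sentence):
    """Apply word-by-word shortening to a coarse-grained compression B."""
    return " ".join(shortest_synonym(tok) for tok in sentence.split())

print(shortest_synonym("demonstrating", pos=wn.VERB))  # prints a shorter synonym if one exists
print(paraphrase("the police officer proved the claim"))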
We expect that because the grammatical C′ accepts the majority of training data, it will improve the fluency of C. 4 Experiments 4.1 Datasets We used two datasets to evaluate the model performance. First is the Giga-MSC dataset detailed in Section 2. A total of 150 annotated sentences were used as the ground truth for testing. Second is the Cornell dataset (McKeown et al., 2010). 4.2 Baseline Approaches We considered (#1) the word graph approach (Filippova, 2010), and an advanced version (#2) keyphrase-based word graph model (Boudin and Morin, 2013) augmented with keyphrase identification (Wan and Xiao, 2008), as our word graph baselines. Additionally, (#3) the hard paraphrasing (Hard-Para) approach directly substituted words and phrases with their shorter synonyms by using WordNet and PPDB 2.0 (size M is chosen with 463,433 paraphrasing pairs). (#4) Seq2seq model was trained using (B, C) pairs. We considered both of them as comparison approaches as well. The training details are presented in Appendix 1. We release the source code here8. 4.3 Out-of-Vocabulary (OOV) Word Handling Both datasets were from the news domain; hence, there are lots of organizations and names that are out of vocabulary. We tackled this problem by exploiting the approach in (Fevry and Phang, 2018). 8https://github.com/code4ai/code 2238 Model METEOR NN-1 NN-2 NN-3 NN-4 Comp. rate Ground truth 8.6 28.0 40.0 49.1 0.50 #1 WG (Filippova, 10) 0.29 0.0 0.0 2.8 6.8 0.34 #2 KWG (Boudin+, 13) 0.36 0.0 0.0 1.1 3.1 0.52 #3 Hard Para. 0.35 10.1 19.7 29.1 38.0 0.51 #4 Seq2seq with attention 0.33 12.7 24.0 34.7 44.4 0.49 #5 Our rewriter (RWT) 0.36 9.0 17.4 25.7 33.8 0.50 Table 2: Results for the Giga-MSC dataset. Model METEOR NN-1 NN-2 NN-3 NN-4 Comp. rate Ground truth 5.2 15.8 23.2 29.6 0.49 #1 WG (Filippova, 10) 0.33 0.0 1.7 5.5 9.8 0.34 #2 KWG (Boudin+, 13) 0.45 0.0 1.8 4.6 8.0 0.52 #3 Hard Para. 0.38 9.2 19.0 28.7 37.7 0.50 #4 Seq2seq with attention 0.37 8.4 18.3 27.6 36.3 0.52 #5 Our rewriter (RWT) 0.40 8.1 17.0 26.0 34.3 0.50 Table 3: Results for the Cornell dataset. Given an input sequence, we first identified all OOV tokens and numbered them in order. We stored the map from the numbered OOV tokens (e.g., OOV1 and OOV2) to words. The corresponding word embeddings were also assigned to each numbered OOV token. We then applied the same numbering system to the target. At the inference, we replaced any output OOV tokens with their corresponding words using the map that was stored beforehand, which allowed us to produce words that were not in the vocabulary. 5 Results and Analysis METEOR metric (n-gram overlap with synonyms) was used for automatic evaluation. The novel ngram rate9 (e.t., NN-1, NN-2, NN-3, and NN-4) was also computed to investigate the number of novel words that could be introduced by the models. Table 2 and Table 3 present the results and below are our observations: (i) keyphrase word graph approach (#2) is a strong baseline according to the METEOR metric. In comparison, the proposed rewriter (#5) yields comparable result on the METEOR metric for the Giga-MSC dataset but lower result for the Cornell dataset. We speculate that it may be due to the difference in the ground-truth compression. 8.6% of novel unigrams exist in the ground-truth compression of the 9Novel n-gram rate = 1 −|S∩C| |C| where S refers to the set of words from all input sentences while C refers set of words from compression. 
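The novel n-gram rate reported in Tables 2 and 3 follows the footnote above, 1 − |S ∩ C| / |C|. The footnote states it for words; the sketch below generalizes it to n-grams to cover NN-2 through NN-4, which is an assumption, and the function names are illustrative.

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_rate(input_sentences, compression, n):
    """Novel n-gram rate = 1 - |S ∩ C| / |C|, where S collects n-grams from all
    input sentences and C the n-grams of the compression."""
    source = set()
    for sent in input_sentences:
        source |= ngrams(sent.lower().split(), n)
    comp = ngrams(compression.lower().split(), n)
    if not comp:
        return 0.0
    return 1.0 - len(source & comp) / len(comp)

inputs = ["the oil spill must be stopped or it will spread for miles"]
output = "the oil leak must be halted"
for n in range(1, 5):  # NN-1 .. NN-4
    print(n, round(novel_ngram_rate(inputs, output, n), 2))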
Giga-MSC dataset, while only 5.2% of novel unigrams exist in that of the Cornell dataset, (ii) Hard Para.(#3), Seq2seq (#4), and our rewriter (#5) significantly increase the number of novel n-grams, and the proposed rewriter (#5) seemed to be a better trade-off between the information coverage (measured by METEOR) and the introduction of novel n-grams across all methods, (iii) on comparing with Seq2seq (#4) and our rewriter (#5), we found that adding pseudo data helps to decrease the novel words rate and increase the METEOR score on both datasets. Method Informativeness Grammaticality KWG 1.06 1.19 RWT 1.02 1.40† Table 4: Human evaluation for informativeness and grammaticality. † stands for significantly better than KWG with 0.95 confidence. Human Evaluation As METEOR metric cannot measure the grammaticality of compression, we asked two human raters10 to assess 50 compressed sentences out of the Giga-MSC test dataset in terms of informativeness and grammaticality. We used 0-2 point scale (2 pts: excellent; 1 pts: good; 0 pts: poor), similar to previous work (we recommend readers to refer to Appendix 2 for the 0-2 scale point evaluation details). Table 4 shows the 10Both raters are native English speakers and not authors of this paper. 2239 Sentence1 Alleged Russian mobster Alimzhan Tokhtakhounov, accused of conspiring to fix skating events at the 2002 Winter Olympics in salt lake city, has returned to Moscow, the Kommersant daily reported wednesday. Sentence2 US prosecutors accused Tokhtakhounov of conspiring to fix the artistic skating events at the salt lake city games with the assistance of the French and Russian judges. KWG US prosecutors accused Tokhtakhounov, accused of conspiring to fix the artistic skating events at the salt lake city, has returned to Moscow, the Kommersant daily reported wednesday. RWT Tokhtakhounov, accused of conspiracy to fix the artistic skating events at the salt lake town, has returned to Moscow, the Kommersant daily reported. Table 5: Case study. The words in bold are paraphrase, while the underlined words are ungrammatical parts in the compression. KWG refers to word-graph baseline and RWT refers to our rewriter. average ratings for informativeness and readability. From that, we found that our rewriter (RWT) significantly improved the grammaticality of compression in comparison with the keyphrase word graph approach, implying that the pseudo data may contribute to the language modeling of the decoder, thereby improving the grammaticality. Context Awareness Evaluation Because several novel words were introduced in Hard Para. (#3), Seq2seq (#4), and our rewriter (#5), we were interested to determine whether the compressions generated by these models were context-aware. We herein considered an out-of-the-box contextaware encoder, BERT (Devlin et al., 2018). The evaluation proceeded as follows: As for a sentence with N words, S = [w1, w2, ..., wN], we sequentially masked each word at a time and calculated the average likelihood using this formula: CXT(S) = 1 n !n i=1 −logp(wi|c) where c = [w1, ...wi−1, wi+1, ..., wn]. We used the implementation mentioned in11. The low likelihood CTX(S) may suggest a better context awareness. As presented in Table 6, the proposed rewriter achieves the lowest likelihood on both datasets, thereby indicating better context awareness in its generated compression. 
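The context-awareness score above is CXT(S) = (1/n) Σ_i −log p(w_i | c), i.e. the average negative log-likelihood of each token given the sentence with that token masked (the summation symbol is garbled in the extracted text). Below is a hedged sketch using the Hugging Face transformers package rather than the bert-as-language-model implementation cited in the footnote; it assumes a recent transformers release whose model outputs expose .logits.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def cxt_score(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total, n = 0.0, 0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id   # mask one token at a time
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += -log_probs[ids[i]].item()
        n += 1
    return total / max(n, 1)                  # lower = more context-aware

print(cxt_score("the oil spill must be stopped or it will spread for miles."))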
Case Study To illustrate the pros and cons of the proposed rewriter, as listed in Table 5, we conducted a case study where two sentences were given as input and two compression outputs were produced by KWG and RWT. We observed that the RWT corrected the ungrammatical parts (e.t., underlined words,) generated by KWG. However, paraphrasing was not always accurate because 11https://github.com/xu-song/bert-as-language-model Method Giga-MSC Cornell Base Large Base Large Hard Para. 354.6 473.6 273.1 316.7 Seq2seq 249.1 219.1 326.1 388.3 Ours 148.5 158.4 203.9 277.4 Table 6: Context awareness scores for three models. Base and Large refer to the different model configurations of BERT. phrases such as salt lake city are fixed collocations. This may degrade the informativeness of the compression. 6 Conclusion In this work, we propose a coarse-to-fine rewriter for multi-sentence compression with a specific focus on improving the quality of compression. The experimental results show that the proposed method produced more grammatical sentences, meanwhile introducing novel words in the compression. Furthermore, we presented an approach for the evaluation of context-awareness which may shed light on automatic evaluation for quality of sentence by virtue of pre-trained models. In the future, we will consider extending the current approach to the single document or multiple document summarization. Acknowledgments This study is supported by the Japan Science and Technology Agencys (JST) CREST program JPMJCR1513. We are thankful to the reviewers’ helpful comments. We also thank professor McKeown for referring us to their data. 2240 References Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2015. Multi-document abstractive summarization using ilp based multi-sentence compression. In IJCAI, pages 1208–1214. Regina Barzilay and Kathleen R McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297– 328. Florian Boudin and Emmanuel Morin. 2013. Keyphrase extraction for n-best reranking in multi-sentence compression. In North American Chapter of the Association for Computational Linguistics (NAACL). Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 137–144. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Thibault Fevry and Jason Phang. 2018. Unsupervised sentence compression using denoising autoencoders. arXiv preprint arXiv:1809.02669. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 322–330. Association for Computational Linguistics. An-Vinh Luong, Nhi-Thao Tran, Van-Giau Ung, and Minh-Quoc Nghiem. 2015. Word graph-based multi-sentence compression: Re-ranking candidates using frequent words. In Knowledge and Systems Engineering (KSE), 2015 Seventh International Conference on, pages 55–60. IEEE. Kathleen McKeown, Sara Rosenthal, Kapil Thadani, and Coleman Moore. 2010. Time-efficient creation of an accurate sentence fusion corpus. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–320. Association for Computational Linguistics. Mir Tafseer Nayeem and Yllias Chali. 2017. Paraphrastic fusion for abstractive multi-sentence compression generation. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2223–2226. ACM. Mir Tafseer Nayeem, Tanvir Ahmed Fuad, and Yllias Chali. 2018. Abstractive unsupervised multidocument summarization using paraphrastic sentence fusion. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1191–1204. Elvys Linhares Pontes, St´ephane Huet, Thiago Gouveia da Silva, Andr´ea carneiro Linhares, and JuanManuel Torres-Moreno. 2018. Multi-sentence compression with word vertex-labeled graphs and integer linear programming. In Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12), pages 18–27. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond K Wong, and Fang Chen. 2016. An efficient approach for multi-sentence compression. In Asian Conference on Machine Learning, pages 414–429. Xiaojun Wan and Jianguo Xiao. 2008. Collabrank: towards a collaborative approach to single-document keyphrase extraction. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 969–976. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2241–2251 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2241 Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text Jianxing Yu1,2, Zheng-Jun Zha3, Jian Yin1,2 ∗ 1School of Data and Computer Science, Sun Yat-sen University 2Guangdong Key Laboratory of Big Data Analysis and Processing, China 3School of Information Science and Technology, University of Science and Technology of China {yujx26, issjyin}@mail.sysu.edu.cn [email protected] Abstract This paper focuses on the topic of inferential machine comprehension, which aims to fully understand the meanings of given text to answer generic questions, especially the ones needed reasoning skills. In particular, we first encode the given document, question and options in a context aware way. We then propose a new network to solve the inference problem by decomposing it into a series of attentionbased reasoning steps. The result of the previous step acts as the context of next step. To make each step can be directly inferred from the text, we design an operational cell with prior structure. By recursively linking the cells, the inferred results are synthesized together to form the evidence chain for reasoning, where the reasoning direction can be guided by imposing structural constraints to regulate interactions on the cells. Moreover, a termination mechanism is introduced to dynamically determine the uncertain reasoning depth, and the network is trained by reinforcement learning. Experimental results on 3 popular data sets, including MCTest, RACE and MultiRC, demonstrate the effectiveness of our approach. 1 Introduction Machine comprehension is one of the hot research topics in natural language processing. It measures the machine’s ability to understand the semantics of a given document via answering questions related to the document. Towards this task, many datasets and corresponding methods have been proposed. In most of these datasets, such as CNN/Daily Mail (Hermann et al., 2015), and SQuAD (Rajpurkar et al., 2016), the answer is often a single entity or a text span in the document. That leads to the fact that lots of questions can be solved trivially via word and context matching (Trischler et al., 2016a) instead of ∗Corresponding author. Figure 1: Sample of question needed reasoning skill. Correct answer is marked with an asterisk genuine comprehension on text. To alleviate this issue, some datasets are released, such as MCTest (Richardson et al., 2013), RACE (Lai et al., 2017) and MultiRC (Khashabi et al., 2018), where the answers are not restricted to be the text spans in the document; instead, they can be described in any words. Specially, a significant proportion of questions require reasoning which is a sophisticated comprehension ability to choose the right answers. As shown in Figure 1, the question asks the reason for the phenomenon on sentence S5. The answer has to be deduced over the logical relations among sentence S3, S4 and S5, and then entailed from S3 to the correct option B. Difficultly, such deduced chain is not explicitly given but expressed on text semantics. Existing methods primarily focus on document-question interaction to capture the context similarity for answer span matching (Wang et al., 2017). 
They have minimal capability to synthesize supported facts scattered across multiple sentences to form the evidence chain which is crucial for reasoning. To support inference, mainstream methods can be summarized into three folds. One is converting the unstructured document to formal predicate expressions, on which to perform mathematical d2242 eduction via Bayesian network or first-order logic. The conversion lacks of adequate robustness to be applicable. Another direction is to explicitly parse the document into a relation tree, on which to generate answers via hand-crafted rules (Sun et al., 2018b). However, the parser often has to cascade to the model, which is difficult to train globally and would suffer from the error propagation problem. The third method exploits memory network to imitate reasoning by multi-layer architecture and iterative attention mechanism (Weston et al., 2014). Nevertheless, the reasoning ability is insufficient due to the lack of prior structural knowledge to lead the inference direction. We observe that when humans answer the inferential question, they often finely analyze the question details and comprehend contextual relations to derive an evidence chain step by step. Using the sample in Figure 1 for illustration, humans first investigate the question to find the useful details, such as the question type “why”, and the aspect asked, i.e. “some newspapers refused delivery to distant suburbs”. Such details often play a critical role for answering. For example, why question usually expects the causal relation that could indicate the reasoning direction. Based on question details, they then carefully read the document to identify the content on which the question aspect mentions, that is, the sentence S5. Based on the content, they would deduce new supported evidences step by step guided by question type and contextual relations, such as explainable relation between S5 and S4, and casual relation among S3 and S4. By considering the options, they would decide to stop when the observed information is adequate already to answer the question. For instance, by relevant paraphrase, S3 can entail option B that may be the answer. In this process, contextual relations and multi-step deduction are efficient mechanisms for deriving the evidence chain. Based on above observations, we here propose an end-to-end approach to mimic human process for deducing the evidence chain. In particular, we first encode the given document, question and options by considering contextual information. We then tackle the inference problem by proposing a novel network that consists of a set of operational cells. Each cell is designed with structural prior to capture the inner working procedure of an elementary reasoning step, where the step can be directly inferred from the text without strong supervision. The cell includes the memory and three operating units that work in tandem. That is, master unit derives a series of attention-based operations based on the question; reader unit extracts relevant document content on the operation; and writer unit performs the operation to deduce a result and update the memory. The cells are recursively connected, where the result of the previous step acts as the context of next step. The interactions of cells are restricted by structural constraints, so as to regulate the reasoning direction. 
With such structural multi-step design, the network can integrate the supported facts by contextual relations to build the evidence chain in arbitrarily complex acyclic form. Since the reasoning depth is uncertain, a termination mechanism is exploited to adaptively determine the ending. Moreover, a reinforcement approach is employed for effective training. Experiments are conducted on 3 popular data sets that contain questions required reasoning skills, including MCTest, RACE and MultiRC. The results show the effectiveness of our approach. The main contributions of this paper include, • We design a new network that can answer inferential question by recursively deducing the evidence chain from the text. • We propose an effective termination mechanism which can dynamically determine the uncertain reasoning depth. • We employ a reinforcement training approach and conduct extensive experiments. The rest of this paper is organized as follows. Section 2 elaborates our approach on the inferential framework. Section 3 presents the experimental results. Section 4 reviews related work and Section 5 concludes this paper with future works. 2 Approach As shown in Figure 2, our approach consists of three components, including input representation, inferential network composed out of multiple cells, and output. Next, we define some notations, and then elaborate the details on each component. 2.1 Notations and Problem Formulation Given a document D in unstructured text, the task of machine comprehension is to answer the questions according to the semantics of the document. In this paper, multi-choice questions are our major focus. Thus, a set of plausible answer option2243 Figure 2: Overview of the our approach s are assumed to be provided, and the task is reduced to select a correct option from the given set. Formally, let q represent the question, of length S, where {w1, · · · , wS} are the question words; O = {o1, · · · , oL} denotes an option set. For a given document and question x = (D, q), a score h(x, y) ∈R is assigned for each candidate in the option set y = o ∈O, so as to measure its probability of being the correct answer. The option with highest score is outputted as the answer ˆy = argmaxy∈Oh(x, y). 2.2 Input Representation We first encode the input text into distributed vector representations by taking account the context. Question: Two stages are conducted on the encoding. (1) We convert the question into a sequence of learned word embeddings by looking up the pre-trained vectors, such as GloVe (Pennington et al., 2014). By considering the question type would help inference, we customize an embedding to indicate such type via linguistic prior knowledge, such as the positions of interrogative words are often relatively fixed, and the corresponding parts of speech (POS) are mainly adverbs or conjunctions, etc. Practically, we utilize position and POS embedding (Li et al., 2018b) generated by word embedding tool. That is, the embedding layer is a W ∈Rd×v, where d is the dimension and v denotes the number of instances. (2) We concatenate the embeddings of word, position and POS, and feed them into a bi-directional GRU (BiGRU) (Cho et al., 2014) to incorporate sequential context. 
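A minimal PyTorch sketch of this encoding stage is shown below: word, position, and POS embeddings are concatenated and passed through a BiGRU, yielding per-word contextual states and an overall question vector built from the final backward and forward states. Class names, dimensions, and the random inputs are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """BiGRU over (word + position + POS) embeddings, yielding contextual word
    states cw_s and an overall question encoding q."""
    def __init__(self, emb_dim=300, extra_dim=32, hidden=256):
        super().__init__()
        self.bigru = nn.GRU(emb_dim + 2 * extra_dim, hidden,
                            batch_first=True, bidirectional=True)

    def forward(self, word_emb, pos_emb, tag_emb):
        x = torch.cat([word_emb, pos_emb, tag_emb], dim=-1)   # (B, S, d_in)
        states, _ = self.bigru(x)       # cw_s = [forward_s ; backward_s]
        hidden = states.size(-1) // 2
        # overall encoding q: backward state at s=1 and forward state at s=S
        q = torch.cat([states[:, 0, hidden:], states[:, -1, :hidden]], dim=-1)
        return states, q

enc = QuestionEncoder()
cw, q = enc(torch.randn(2, 7, 300), torch.randn(2, 7, 32), torch.randn(2, 7, 32))
print(cw.shape, q.shape)   # (2, 7, 512) (2, 512)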
Then we can yield two kinds of representations, including (a) contextual words: a series of output states cws|S s=1 that represent each word in the context of the question, where cws = [←− hs, −→ hs], ←− hs and −→ hs are the sth hidden states in the backward and forward GRU passes respectively; and (b) overall encoding: q = [←−− cw1, −−→ cwS], the concatenation of the final hidden states. Options: Each option word is embedded by pre-trained vectors and then option is contextually encoded by BiGRU to generate an overall vector. Document: Three steps are performed on the encoding. (1) We encode each document sentence by considering context via BiGRU as aforementioned. The sentence is transformed into an ni × d matrix, where ni is the size of words in the sentence i, and d is the dimension. (2) We conduct attention to compress the sentence encoding into a fixed size vector, and focus on the important components. Intuitively, long sentence may contain multiple significant parts, where each would help inference. For example, two clauses are linked by “or” with the causal relation in the sentence “The oil spill must be stopped or it will spread for miles.” The clauses and the contextual relation can assist answer the question “Why must the oil spill be stopped?” To model such situation, structured self attention technique proposed by Lin et al. (2017) is utilized. It can convert the sentence into a J ×d matrix, attending at J significant parts of the sentence in a context aware way. (3) All sentence matrices are fed into another BiGRU, so as to capture the context between the sentences. That is DH×J×d = {dsd h,j|H,J h,j=1,1}, where H is the sentences size, ds is the sentence vector. 2.3 Micro-Infer Cell Micro-infer is a recurrent cell designed to model the mechanism of an atomic reasoning step. The cell consists of one memory unit and three operational units, including the master unit, reader unit and writer unit. The memory independently 2244 Figure 3: Flow chart of the master unit stores the intermediate results obtained from the reasoning process up to the tth step. Based on the memory state, three operational units work together in series to accomplish the reasoning process. In particular, master unit analyzes the question details to focus on certain aspect via self-attention; reader unit then extracts related content, guided by the question aspect and text context; and the writer unit iteratively integrates the content with preceding results from the memory to produce a new intermediate result. The interactions between the cell’s units are regulated by structured constrains. Specially, the master outcome can only indirectly guide the integration of relevant content into the memory state by soft-attention maps and gating mechanisms. Moreover, a termination gate is introduced to adaptively determine ending of the inference. In the following, we detail the formal specifications of three operational units in the cell. 2.3.1 Master Unit As presented in Figure 3, this unit consists of two components, involving the termination mechanism and question analysis. Termination Mechanism A maximum step is set to guarantee termination. Since the complexity of the questions is different, the reasoning depths are uncertain. To dynamically adjust to such depth, a terminated gate is designed by considering two conditions. That is, the correlation between the intermediate result mt−1 and the reasoning operation at−1 in previous step, as well as mt−1 and candidate answer options ol|L l=1. 
When both conditions are met, an acceptable answer is highly probable to obtain. Technically, the correlations are calculated by Eq.(1), i.e. mt−1 ⊙at−1, mt−1 ⊙ol, respectively. We then combine these two factors to get tat,l, and utilize a sigmoid layer to estimate the ending probability for a certain option. By maximizing over all the options, a termination function fts(mt−1, at−1, ol|L l=1; θts) is generated, where θts is a parameter set, namely (W d×2d ta , bd ta). Based on the function, a binary random variable tt is probabilistically drawn as tt ∼p(·|fts(·; θts)). If tt is True, stop and execute the answer module accordingly; otherwise, continue the tth reasoning step. tat,l = W d×2d ta [mt−1 ⊙at−1, mt−1 ⊙ol] + bd ta fts(·; θts) = max{sigmoid(tat,l)|L l=1} (1) Question Analysis We design a soft-attention based mechanism to analyze the question and determine the basic operation performed at each step. Instead of grasping the complex meaning on the whole question at once, the model is encouraged to focus on certain question aspect at a time, making the reasoning operation can be directly inferred from the text. Three stages are performed as follows. Firstly, we project the question q through a learned linear transformation to derive the aspect related to tth reasoning step, as qt = W d×d qt q+bd qt. Secondly, we use the previously performed operation at−1 and memory result mt−1 as decision base to lead tth reasoning operation. In details, we validate previous reasoning result by leveraging the terminated conditions in Eq.(1), that is, pat = W d×Ld pa [tat,1, · · · , tat,L] + bd pa. We then integrate qt with preceding operation at−1 and validation pat through a linear transformation into aqt, as W d×3d aq [qt, at−1, pat] + bd aq. Thirdly, aqt is regulated by casting it back to original question words cws|S s=1 based on attention in Eq.(2), so as to restrict the space of the valid reasoning operations and boost the convergence rate. In particular, we calculate the correlation act,s and pass it through a softmax layer to yield a distribution avt,s over the question words. By aggregation, a new reasoning operation at is generated, represented in terms of the question words. act,s = W 1×d ac [aqt ⊙cws] + b1 ac avt,s = softmax(act,s) at = PS s=1 avt,s · cws; (2) Briefly, the new reasoning operation at is modeled by a function fna(q, at−1, cws; θna), where θna is a set of parameters, including (W d×d qt , bd qt, W d×3d aq , bd aq, W d×Ld pa , bd pa, W 1×d ac , b1 ac). 2.3.2 Reader Unit As shown in Figure 4, reader unit retrieves relevant document content that is required for per2245 Figure 4: Flow chart of the reader unit forming the tth reasoning operation. The relevance is measured by the content context in a softattention manner, taking account of the current reasoning operation and prior memory. We do not rely on external tools to facilitate globally training. To support transitive reasoning , we first extract the document content relevant to the preceding result mt−1, resulting in dmt,h,j = [W d×d m mt−1 + bd m]⊙[W d×d ds dsh,j+bd ds]. The relevance often indicates a contextual relation in the distributed space. For instance, given a question aspect why, the contents with causal relation are highly expected and their relevant score is likely to be large. Then, dmt,h,j is independently incorporated with the document content dsh,j to produce dnt,h,j, i.e. W d×2d dn [dmt,h,j, dsh,j] + bd dn. 
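Before the reader unit continues below, here is a hedged PyTorch sketch of the master unit's two pieces: the termination score of Eq. (1) and the question-word attention of Eq. (2). The extra scalar projection used to turn ta_{t,l} into a single stopping probability is an assumption (the paper applies the sigmoid to ta_{t,l} directly), and all names and shapes are illustrative.

import torch
import torch.nn as nn

class MasterUnit(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.w_ta = nn.Linear(2 * d, d)   # Eq. (1): W_ta, b_ta
        self.w_ts = nn.Linear(d, 1)       # assumption: scalar projection for the gate
        self.w_ac = nn.Linear(d, 1)       # Eq. (2): W_ac, b_ac

    def terminate_prob(self, m_prev, a_prev, options):
        # m_prev, a_prev: (d,); options: (L, d)
        feats = torch.stack([self.w_ta(torch.cat([m_prev * a_prev, m_prev * o]))
                             for o in options])             # ta_{t,l}, shape (L, d)
        gate = torch.sigmoid(self.w_ts(feats)).squeeze(-1)  # (L,)
        return gate.max()     # stop when the best option already looks answerable

    def next_operation(self, aq_t, question_words):
        # aq_t: (d,) current question aspect; question_words: (S, d) cw_s
        logits = self.w_ac(aq_t * question_words).squeeze(-1)  # ac_{t,s}
        att = torch.softmax(logits, dim=0)                     # av_{t,s}
        return att @ question_words                            # a_t

unit = MasterUnit(d=8)
p = unit.terminate_prob(torch.randn(8), torch.randn(8), torch.randn(4, 8))
a_t = unit.next_operation(torch.randn(8), torch.randn(5, 8))
print(p.item(), a_t.shape)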
This allows us to also consider new information which is not directly related to the prior intermediate result, so as to assist parallel and inductive reasoning. Lastly, we use soft attention to select content that is relevant to the reasoning operation at and candidate options ol|L l=1. Precisely, we unify the at and ol|L l=1 by a linear transformation to obtain oat, i.e. W d×Ld oa [at, ol|L l=1] + bd oa, where the options size L is fixed and predefined. We then measure the correlation between oat and the extracted content dnt,h,j, passing the result through softmax layer to produce an attention distribution. By taking weighted average over the distribution, we can retrieve related content rit by Eq.(3). adt,h,j = W d×d ad [oat ⊙dnt,h,j] + bd ad rvt,h,j = softmax(adt,h,j) rit = PH;J h=1;j=1 rvt,h,j · dsh,j; (3) In short, the retrieved content rit is formulated by a function fri(mt−1, dsh,j, at, ol|L l=1; θri), where θri is a parameter set, involving (W d×d m , bd m, W d×d ds , bd ds, W d×2d dn , bd dn, W d×Ld oa , bd oa, W d×d ad , bd ad). 2.3.3 Writer Unit As illustrated in Figure 5, writer unit is responsible to compute the intermediate result on the tth Figure 5: Flow chart of the writer unit reasoning process and update the memory state. It integrates the retrieved content from the reader unit with the preceding intermediate result in the memory, guided by the tth reasoning operation in the master unit. Details are presented as follows. (1) Motivated by the work on relational reasoning (Santoro et al., 2017), we linearly incorporate the retrieved content rit, prior result mt−1, and question q to get mct = W d×3d mc [rit, mt−1, q] + bd mc, so as to measure their correlations. (2) By considering non-sequential reasoning, such as tree or graph style, we refer to all previous memorized results instead of just the proceeding one mt−1. Motivated by the work on scalable memory network (Miller et al., 2016), we compute the attention of the current operation at against all previous ones ai|t−1 i=1, yielding sati = softmax(W 1×d sa [at ⊙ai] + b1 sa). And then we average over the previous results mi|t−1 i=1 to get preceding relevant support as mpt, that is Pt−1 i=1 sati · mi. By combining mpt with correlated result mct above, we can obtain a plausible result mut, namely W d×d mp mpt + W d×d mc mct + bd mu. (3) The operations on some question aspects such as why need multi-step reasoning and updating while others no need. In order to regulate the valid reasoning space, an update gate is introduced to determine whether to refresh the previous result mt−1 in the memory by the new plausible result mut. The gate αt is conditioned on the operation at by using a learned linear transformation and a sigmoid function. If the gate is open, the unit updates the new result to the memory, otherwise, it skips this operation and performs the next one. αt = sigmoid(W 1×d a at + b1 a) mt = αt · mt−1 + (1 −αt) · mut; (4) In brief, the new reasoning result mt is modeled by a function fnm(mt−1, rit, q, at; θnm), where θnm is a parameter set, including (W d×3d mc , bd mc, W 1×d sa , b1 sa, W d×d mp , W d×d mc , bd mu, W 1×d a , b1 a). 2246 2.4 Output and Training After the terminated condition is met, we can obtain the memory state mt−1, which indicates the final intermediate result of the reasoning process. For the multi-choice questions focused in the paper, there is a fixed set of possible answers. We then leverage a classifier to predict an answer by referring to the question q and options ol|L l=1. 
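Continuing the sketch, the reader's content attention (Eq. 3) and the writer's gated memory update (Eq. 4) can be written as below. The option fusion oa_t, the self-attention over all earlier memory states, and the exact bias terms are simplified away, so this is an illustrative assumption rather than the authors' implementation.

import torch
import torch.nn as nn

class ReaderWriter(nn.Module):
    """Attend over document sentence parts ds_{h,j} to retrieve ri_t (Eq. 3),
    then gate the memory update with alpha_t = sigmoid(W a_t) (Eq. 4)."""
    def __init__(self, d=256):
        super().__init__()
        self.w_m, self.w_ds = nn.Linear(d, d), nn.Linear(d, d)
        self.w_dn = nn.Linear(2 * d, d)
        self.w_ad = nn.Linear(d, 1)
        self.w_mc = nn.Linear(3 * d, d)
        self.w_gate = nn.Linear(d, 1)

    def read(self, m_prev, ds, oa_t):
        # ds: (H*J, d) flattened sentence parts; m_prev, oa_t: (d,)
        dm = self.w_m(m_prev) * self.w_ds(ds)                 # relevance to prior memory
        dn = self.w_dn(torch.cat([dm, ds], dim=-1))           # also keep fresh information
        att = torch.softmax(self.w_ad(oa_t * dn).squeeze(-1), dim=0)
        return att @ ds                                       # ri_t

    def write(self, m_prev, ri_t, q, a_t):
        mu_t = self.w_mc(torch.cat([ri_t, m_prev, q]))        # plausible new result
        alpha = torch.sigmoid(self.w_gate(a_t))               # update gate
        return alpha * m_prev + (1 - alpha) * mu_t            # m_t

rw = ReaderWriter(d=8)
ds = torch.randn(6, 8)                     # 6 flattened sentence parts
ri = rw.read(torch.randn(8), ds, torch.randn(8))
m_t = rw.write(torch.randn(8), ri, torch.randn(8), torch.randn(8))
print(ri.shape, m_t.shape)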
Precisely, we first measure the correlation of mt−1 against q and ol|L l=1, to get mt−1 ⊙q, mt−1 ⊙ol. By concatenation, we pass the outcome through a 2-layer fully-connected softmax network to derive an answer option by Eq.(5), with ReLU activation function to alleviate over-fitting. In summary, the parameter set θans is (W d×2d u , bd u, W 1×Ld an , b1 an). ul = ReLU(W d×2d u [mt−1 ⊙q, mt−1 ⊙ol] + bd u) Anst = softmax(W 1×Ld an [u1, · · · , uL] + b1 an) (5) Reinforcement Learning Due to the discrete of the termination steps, the proposed network could not be directly optimized by back-propagation. To facilitate training, a reinforcement approach is used by viewing the inference operations as policies, including the reasoning operation flow G1:T , termination decision flow t1:T and answer prediction AT , where T is the reasoning depth. Given ith training instance ⟨qi; Di; oi⟩, the expected reward r is defined to be 1 if the predicted answer is correct, otherwise 0. The rewards on intermediate steps are 0, i.e. {rt = 0}|T−1 t=1 . Each probable value pair of (G; t; A) corresponds to an episode, where all possible episodes denote as A†. Let J(θ) = Eπ hPT t=1 rt i be the total expected reward, where π(G, t, A; θ) is a policy parameterized by the network parameter θ, involving the encoding matrices θW , question network θna, termination gate θts, reader network θri, writer network θnm, and answer network θans. To maximize the reward J, we explore gradient descent optimization, with Monte-Carlo REINFORCE (Williams, 1992) estimation by Eq.(6). ∇θJ(θ) = Eπ(G,t,A;θ) [∇θ log π(G, t, A; θ)(r −b)] = P (G,t,A)∈A† π(G, t, A; θ)[∇θ log π(G, t, A; θ)(r −b)] (6) where b is a critic value function. It is usually set as P (G,t,A) π(G, t, A; θ)r (Shen et al., 2016) and (r/b −1) is often used instead of (r −b) to achieve stability and boost the convergence speed. 3 Evaluations In this section, we extensively evaluate the effectiveness of our approach, including comparisons with state-of-the-arts, and components analysis. 3.1 Data and Experimental Setting As shown in Table 1, experiments were conducted on 3 popular data sets in 9 domains, including MCTest, RACE and MultiRC. Different from data sets such as bAbI (Weston et al., 2015) that are synthetic, the questions in the evaluated data sets are high-quality to reflect real-world applications. Data set #doc #q #domain ratio MCTest 660 2,640 1 54.2% MC160 160 640 1 53.3% MC500 500 2,000 1 54.6% RACE 27,933 97,687 1 25.8% RACE-M 7,139 20,794 1 22.6% RACE-H 28,293 69,394 1 26.9% MultiRC 871 9,872 7 59.0% Table 1: Statistics of the data sets. #doc, #q denote the size of the documents and questions accordingly; ratio means the proportion of the questions that require reasoning on multiple sentences; MC160 is a human double-check subset of MCTest, while MC500 is an unchecked one; RACE-M and RACE-H are the subsets of RACE on middle/ high school exams, respectively Hyper-parameters were set as follows. For question encoding, the POS tags were obtained by using OpenNLP toolkit. Multiple cells were connected to form the network, where the cells were weight sharing. The maximum size of connected cells length was 16. The network was optimized via Adam (Kingma and Ba, 2014) with a learning rate of 10−4 and a batch size of 64. We used gradient clipping with clipnorm of 8, and employed early stopping based on the validation accuracy. 
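The REINFORCE update of Eq. (6) above can be sketched as a surrogate loss over one episode, using the (r/b − 1) rescaling mentioned there; the collected log-probabilities, the baseline value, and the answer layer are stand-in assumptions here and do not reproduce the full training loop.

import torch

def reinforce_loss(log_probs, reward, baseline, eps=1e-8):
    """REINFORCE surrogate for one episode.
    log_probs: list of log pi(action) tensors collected along the episode
               (termination decisions t_1..t_T and the final answer A_T).
    reward:    1.0 if the predicted answer is correct, else 0.0.
    baseline:  critic value b; rescaled as (r/b - 1) for stability.
    Minimizing this loss performs gradient ascent on the expected reward."""
    advantage = reward / (baseline + eps) - 1.0
    episode_logprob = torch.stack(log_probs).sum()
    return -advantage * episode_logprob

# Toy usage with made-up log-probabilities for a 3-step episode:
lp = [torch.tensor(-0.2, requires_grad=True),
      torch.tensor(-0.7, requires_grad=True),
      torch.tensor(-0.4, requires_grad=True)]
loss = reinforce_loss(lp, reward=1.0, baseline=0.55)
loss.backward()
print(loss.item())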
For word embedding, we leveraged 300-dimension pre-trained word vectors from GloVe, where the word embeddings were initialized randomly using a standard uniform distribution and not updated during training. The out-of-vocabulary words were initialized with zero vectors. The number of hidden units in GRU was set to 256, and the recurrent weights were initialized by random orthogonal matrices. The other weights in GRU were initialized from a uniform distribution between −0.01 and 0.01. We maintained the exponential moving averages on the model weights with a decay rate of 0.999, and used them at test time instead of the raw weights. Variational dropout of 0.15 was used across the network and 2247 maximum reasoning step was set to 5. Training usually converged within 30 epochs. 3.2 Comparisons with the State-of-the-Arts We compared our approach with all published baselines at the time of submission on the evaluated data sets. The baselines were summarized as follows. (1) On RACE data set, six baselines were employed, including three introduced in the release of the data set, that is Sliding Window (Richardson et al., 2013), Stanford AR (Chen et al., 2016), and GA (Dhingra et al., 2016); another three methods proposed recently, namely DFN (Xu et al., 2017), BiAttention 250d MRU(Tay et al., 2018), and OFT (Radford et al., 2018). (2) For MCTest data set, nine baselines were investigated, involving four on lexical matching, i.e. RTE, SWD, RTE+SWD Richardson et al. (2013), Linguistic (Smith et al., 2015); two methods used hidden alignment, that is Discourse (Narasimhan and Barzilay, 2015), Syntax (Wang et al., 2015); three approaches based on deep learning, i.e. EK (Wang et al., 2016), PH (Trischler et al., 2016b), and HV (Li et al., 2018a). (3) Regarding multi-choices questions in MultiRC data set, we replace softmax to sigmoid at the answer generation layer, so as to make prediction on each option. Accordingly, five baselines were exploited, including three used in the release of the data set, that is IR, SurfaceLR, and LR (Khashabi et al., 2018); two methods currently composed, namely OFT (Radford et al., 2018) and Strategies (Sun et al., 2018a). As elaborated in Figure 6, our approach outperformed the individual baselines on all three data sets 1. Specifically, for RACE data set, our approach achieved the best performance and outperformed the second one (i.e. OFT) in terms of average accuracy by over 4.12%, 5.00% on RACE-M and RACE-H, respectively. On MCTest data set, the outperformance was 5.55%, 7.14% over PH baseline which was the second best on MC160multi and MC500-multi, respectively, where multi is a subset of the data set that is more difficult and needs understanding multiple sentences to answer. For MultiRC data set, our approach led to a performance boost against the second best one (i.e. Strategies) in terms of macro-average F1 by over 4.06%, while in terms of micro-average 1The leaderboard rankings were quickly refreshed, but our performance is still competitive at the camera-ready time. Figure 6: Comparisons of our approach against stateof-the-arts on the RACE, MCTest, and MultiRC data sets respectively. Statistical significant with pvalues<0.01 using two-tailed paired test F1 and exact match accuracy by over 5.20% and 6.64%, respectively. Such results showed that our approach with structural multi-step design and context aware inference can correctly answer the questions, especially the non-trivial ones required reasoning, thus boost the overall performance. 
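The evaluation-time weight averaging mentioned in the training setup (exponential moving average of the weights with decay 0.999, used at test time instead of the raw weights) can be maintained with a small helper like the one below; the class and method names are illustrative and not tied to any particular library.

import torch

class WeightEMA:
    """Keep shadow copies of model parameters updated as
    shadow = decay * shadow + (1 - decay) * param, and load them for evaluation."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {k: v.detach().clone() for k, v in model.state_dict().items()}

    def update(self, model):
        with torch.no_grad():
            for k, v in model.state_dict().items():
                if v.dtype.is_floating_point:
                    self.shadow[k].mul_(self.decay).add_(v, alpha=1 - self.decay)
                else:
                    self.shadow[k].copy_(v)

    def copy_to(self, model):
        model.load_state_dict(self.shadow)

# Typical loop: after each optimizer.step(), call ema.update(model);
# before evaluation, call ema.copy_to(eval_model).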
3.3 Ablations Studies To gain better insight into the relative contributions of various components in our approach, empirical ablation studies were performed on seven aspects, including (1) position and POS aware embedding on the question; (2) structural selfattention in document encoding; (3) two in the master unit, that is, guiding the reasoning operation by previous memory result, and casting back to original question words; (4) extract2248 ing relevant content based on preceding memory result in reader unit; (5) two in the writer unit, namely, non-sequential reasoning and updating gate mechanisms. They were denoted as pos aware, doc self att, rsn prior mem, que w reg, prior mem res, non seq rsn, and udt gate, respectively. Figure 7: Ablation studies on various components of our approach for affecting the performance As displayed in Figure 7, the ablation on all evaluated components in our approach led to the performance drop. The drop was more than 10% on four components, including (1) rsn prior mem; Lack of the memory guidance, the inferred result from previous step could not be served as context for the next. Losing such valuable context may lead to the misalignment of the reasoning chain. (2) prior mem res; Discard of the preceding memory result, the relevant content with contextual relations would not be identified. Such relations are the key for transitive reasoning. (3) que w reg; Without casting back to original question words, Figure 8: Evaluation on the termination mechanism it is equivalent to processing the complex question at one step without identifying the details. Such coarse-grained processing fails to effectively regulate the space of the valid reasoning operations, and may confuse the reasoning direction. (4) udt gate; The gate could help balance the complex and simple questions, and reduce long-range dependencies in the reasoning process by skipping, which would improve performance. These results further convinced us on the significant value of imposing strong structural priors to help the network derive the evidence chain from text. Furthermore, we evaluated the efficiency of the termination mechanism by replacing it with fixed steps from 1 up to 5. The results on the RACE and MCTest data sets showed the replacement would lead to drop on average accuracy and slowdown on the convergence rate. As demonstrated in Figure 8, for fixed size reasoning, more steps performed well at first, but deteriorated soon, while dynamic strategy can adaptively determine the optimal termination, that may help boost the accuracy. 3.4 Case Study Due to the use of soft attention, the proposed network offers a traceable reasoning path which can interpret the generation of the answer based on the attended words. To better understand the reasoning behavior, we plotted the attention map over the document, question and options in Figure 9 with respect to the sample on Figure 1. From the sequence of the maps, we observed that the network adaptively decided which part of an input question should be analyzed at each hop. For example, it first focused on the question aspect “some newspapers refused delivery to distant suburbs.” Then it generated evidence attended at S5 regarding to the focused aspect by similarity. Subsequently, the aspect “why” was focused and evidence attended at S4 was identified. 
We may infer that since 2249 Figure 9: Visualized attention map on figure 1 sample S4 and previous intermediate result S5 contain the explainable relation, they would most likely be correlated in the distributed space with sentencelevel context aware encoding. Later, “why” was re-focused, the evidence attended at S3 was derived. Finally, option B was attended and the process ended due to termination unit may be triggered to work. Such results showed the network can derive the answer by capturing underlying semantics of the question and sequentially traversing the relations on document based on the context. 4 Related Work Earlier studies on machine comprehension mainly focused on the text span selection question. It is often transformed into a similarity matching problem and solved by feature engineeringbased methods (Smith et al., 2015) or deep neural networks. The classical features include lexical features (e.g. overlapping of words, Ngram, POS tagging) (Richardson et al., 2013), syntactic features (Wang et al., 2015), discourse features (Narasimhan and Barzilay, 2015), etc. Besides, the typical networks involve Stanford AR (Chen et al., 2016), AS Reader (Kadlec et al., 2016), BiDAF (Seo et al., 2016), MatchLSTM (Wang and Jiang, 2017), etc, which used distributed vectors rather than discrete features to better compute the contextual similarity. To support inference, existing models can be classified into three categories, including predicate based methods (Richardson and Domingos, 2006), rule-based methods relied on external parser (Sun et al., 2018b) or pre-built tree (Yu et al., 2012), and multi-layer memory networks (Hill et al., 2015), such as gated attended net (Dhingra et al., 2016), double-sided attended net (Cui et al., 2016), etc. These models either lack end-to-end design for global training, or no prior structure to subtly guide the reasoning direction. On the topic of multi-hop reasoning, current models often have to rely on the predefined graph constructed by external tools, such as interpretable network (Zhou et al., 2018) on knowledge graph. The graph plainly links the facts, from which the intermediate result in the next hop can be directly derived. However, in this paper, the evidence graph is not explicitly given by embodied in the text semantics. Another related works are on Visual QA, aiming to answer the compositional questions with regards to a given image, such as “What color is the matte thing to the right of the sphere in front of the tiny blue block?” In particular, Santoro et al. (2017) proposed a relation net, yet the net was restricted to relational question, such as comparison. Later, Hudson and Manning (2018) introduced an iterative network. The network separated memory and control to improve interpretability. Our work leverages such separated design. Different from previous researches, we dedicate to inferential machine comprehension, where the question may not be compositional, such as why question, but requires reasoning on an unknown evidence chain with uncertain depth. The chain has to be inferred from the text semantics. To the best of our knowledge, no previous studies have investigated an end-to-end approach to address this problem. 5 Conclusions and Future Works We have proposed a network to answer generic questions, especially the ones needed reasoning. We decomposed the inference problem into a series of atomic steps, where each was executed by the operation cell designed with prior structure. 
Multiple cells were recursively linked to produce an evidence chain in a multi-hop manner. Besides, a terminated gate was presented to dynamically determine the uncertain reasoning depth and a reinforcement method was used to train the network. Experiments on 3 popular data sets demonstrated the efficiency of the approach. Such approach is mainly applied to multiple-choice questions now. In the future, we will expand it to support the questions on text span selection by using the relation type rather than the option as the terminated condition. For example, given the why question, reasoning process should be stopped when unrelated relation is met, such as transitional relation. Acknowledgments This work is supported by the National Key R&D Program of China (2018YFB1004404), Key R&D Program of Guangdong Province (2018B010107005, 2019B010120001), National Natural Science Foundation of China (U1711262, U1401256, U1501252, U1611264, U1711261). 2250 References D. Chen, J. Bolton, and C.D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th ACL, volume abs/1606.02858. K. Cho, B.V. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using rnn encoderdecoder for statistical machine translation. In Proceedings of EMNLP, volume abs/1704.04683. Y. Cui, Z. Chen, S. Wei, S. Wang, T. Liu, and G. Hu. 2016. Attention-over-attention neural networks for reading comprehension. In Proceedings of the 55th ACL, volume abs/1607.04423. B. Dhingra, H. Liu, W. W.Cohen, and R. Salakhutdinov. 2016. Gated-attention readers for text comprehension. In Proceedings of the 55th ACL, volume abs/1606.01549. K.M. Hermann, T. Kocisk´y, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, volume abs/1506.03340. F. Hill, A. Bordes, S. Chopra, and J. Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. In Journal of Computer Science, abs/1511.02301. D.A. Hudson and C.D. Manning. 2018. Compositional attention networks for machine reasoning. In Proceedings of ICLR, volume abs/1803.03067. R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th ACL, volume abs/1603.01547. D. Khashabi, S. Chaturvedi, M. Roth, S. Upadhyay, and D. Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of NAACL-HLT, pages 252–262. D.P. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of ICLR, volume abs/1412.6980. G. Lai, Q. Xie, H. Liu, Y. Yang, and E.H. Hovy. 2017. RACE: large-scale reading comprehension dataset from examinations. In Proceedings of EMNLP, volume abs/1704.04683. C. Li, Y. Wu, and M. Lan. 2018a. Inference on syntactic and semantic structures for machine comprehension. In Proceedings of AAAI, pages 5844–5851. L. Li, Y. Liu, and A. Zhou. 2018b. Hierarchical attention based position-aware network for aspect-level sentiment analysis. In Proceedings of CoNLL, volume abs/1704.04683, page 181189. Z. Lin, M. Feng, C.N. Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio. 2017. A structured selfattentive sentence embedding. In Proceedings of ICLR, volume abs/1703.03130. A.H. Miller, A. Fisch, J. Dodge, A. Karimi, A. Bordes, and J. Weston. 2016. Key-value memory networks for directly reading documents. 
In Proceedings of the 54th ACL, volume abs/1606.03126. K. Narasimhan and R. Barzilay. 2015. Machine comprehension with discourse relations. In Proceedings of the 53rd ACL, pages 1253–1262. J. Pennington, R. Socher, and C.D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. 2018. Improving language understanding by generative pre-training. In eprint. P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. volume abs/1606.05250. M. Richardson, C. Burges, and E. Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, pages 193–203. M. Richardson and P. Domingos. 2006. Markov logic networks. In Journal of Machine Learning, 62(12):107–136. A. Santoro, D. Raposo, D.G.T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T.P. Lillicrap. 2017. A simple neural network module for relational reasoning. In eprint arXiv:1706.01427, abs/1706.01427. M.J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR, volume abs/1611.01603. Y. Shen, P. Huang, J. Gao, and W. Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD, volume abs/1609.05284. E. Smith, N. Greco, M. Bosnjak, and A. Vlachos. 2015. A strong lexical matching method for the machine comprehension test. In Proceedings of EMNLP, pages 1693–1698. K. Sun, D. Yu, D. Yu, and C. Cardie. 2018a. Improving machine reading comprehension with general reading strategies. In eprint arXiv:1810-13441, abs/1810.13441. Y. Sun, G. Cheng, and Y. Qu. 2018b. Reading comprehension with graph-based temporal-casual reasoning. In Proceedings of COLING, pages 806–817. Y. Tay, L.A. Tuan, and S.C. Hui. 2018. Multi-range reasoning for machine comprehension. In eprint arXiv:1803.09074, abs/1803.09074. 2251 A. Trischler, Z. Ye, X. Yuan, and K. Suleman. 2016a. Natural language comprehension with the epireader. In Proceedings of EMNLP, volume abs/1606.02270. A. Trischler, Y. Zheng, X. Yuan, H. Jing, and P. Bachman. 2016b. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th ACL, pages 432–441. B. Wang, S. Guo, L.L. Kang, S. He, and J. Zhao. 2016. Employing external rich knowledge for machine comprehension. In Proceedings of IJCAI. H. Wang, M. Bansal, K. Gimpel, and D. McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Proceedings of the 53rd ACL, pages 700–706. S. Wang and J. Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In Proceedings ICLR, volume abs/1608.07905. W. Wang, N. Yang, F. Wei, B. Chang, and M. Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th ACL, pages 189–198. J. Weston, A. Bordes, S. Chopra, and T. Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. In Proceedings ICLR, volume abs/1502.05698. J. Weston, S. Chopra, and A. Bordes. 2014. Memory networks. In eprint arXiv:1503.08895, abs/1410.3916. R.J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Journal Machine Learning, 8(34):229–256. Y. Xu, J. Liu, J. Gao, Y. Shen, and X. Liu. 2017. Dynamic fusion networks for machine reading comprehension. 
In eprint arXiv:1711.04964, abs/1711.04964. J. Yu, Z.J. Zha, and T.S. Chua. 2012. Answering opinion questions on products by exploiting hierarchical organization of consumer reviews. In Proceedings of EMNLP, volume abs/1704.04683. M. Zhou, M. Huang, and X. Zhu. 2018. An interpretable reasoning network for multi-relation question answering. In Proceedings COLING, volume abs/1801.04726.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2252–2262 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2252 Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension Yimeng Zhuang, Huadong Wang Samsung Research China - Beijing (SRC-B) {ym.zhuang, huadong.wang}@samsung.com Abstract Multi-passage reading comprehension requires the ability to combine cross-passage information and reason over multiple passages to infer the answer. In this paper, we introduce the Dynamic Self-attention Network (DynSAN) for multi-passage reading comprehension task, which processes cross-passage information at token-level and meanwhile avoids substantial computational costs. The core module of the dynamic self-attention is a proposed gated token selection mechanism, which dynamically selects important tokens from a sequence. These chosen tokens will attend to each other via a self-attention mechanism to model long-range dependencies. Besides, convolutional layers are combined with the dynamic self-attention to enhance the model’s capacity of extracting local semantic. The experimental results show that the proposed DynSAN achieves new state-of-the-art performance on the SearchQA, Quasar-T and WikiHop datasets. Further ablation study also validates the effectiveness of our model components. 1 Introduction As a critical approach for evaluating the ability of an intelligent agent to understand natural language, reading comprehension (RC) is a challenging research direction, attracting many researchers’ interest. In real application scenarios, such as web search, the passages may be multiple and extended, and may be comprised of relevant and irrelevant contents. It involves the problem of multi-passage reading comprehension. In multi-passage setting, cross-passage information interaction is vital for modeling long-range dependencies, co-references between entities in different passages (Dhingra et al., 2018), crosspassage answer verification (Wang et al., 2018b), and multihop reasoning (Welbl et al., 2018), etc. Great efforts have been made to develop models for multi-passage task, such as Wang et al. (2018b); Zhong et al. (2019); Dehghani et al. (2019a); Dhingra et al. (2018); De Cao et al. (2019); Song et al. (2018). The common practice of these approaches is that all the embeddings in a passage or a span are integrated into a single vector and the cross-passage information interactions are based on these coarse-grain semantic representations. However, it may cause potential issues. As is pointed out in Bahdanau et al. (2015); Cho et al. (2014), compressing all the necessary information into a single vector may lead to “sacrifice” some critical information due to the allocated capacity to remember other information. This problem is prevalent in Neural Machine Translation (NMT), the recent models, such as the Transformer (Vaswani et al., 2017), workaround this issue by decoding on token-level context encodings of the source text. As such, we hypothesize that fine-grain representations may keep precise semantic information, and may be beneficial to cross-passage information interactions in RC tasks. In this paper, we focus on an architecture which deals with the cross-passage information at token-level. The proposed architecture is a variant of the Self-attention Network (SAN) (Vaswani et al., 2017; Shen et al., 2018a). 
Our model employs a self-attention mechanism to combine tokenlevel supportive information from all passages in a multi-step process. Directly applying selfattention over all tokens is computationally expensive. Instead, in each step, the most important tokens are dynamically selected from all passages, and information interaction only happens over these chosen tokens via the self-attention mechanism. The motivation behind it is an observation that the information used to answer the question is usually concentrated on a few words. 2253 Our experiments verify this observation to a certain extent. We expect that our model can automatically find out these important tokens. Thus we propose a gated token selection mechanism and equip it with the self-attention module. We intend the model to achieve a balance in speed, memory, and accuracy. While the selfattention mechanism is widely used in end-to-end models to capture long-range dependency, it is intrinsically inefficient in memory usage. Shen et al. (2018b) elaborates the memory issue. The memory required to store the attention matrix grows quadratically with the sequence length. Considering real scenarios, such as web search, in which the retrieval system returns hundreds of articles, and each contains hundreds or thousands of words, thus applying self-attention on all tokens in the supporting passages is computationally expensive. Compared to recurrent neural networks, such as LSTM (Hochreiter and Schmidhuber, 1997), SAN is highly parallelizable and usually faster on long sequence (Vaswani et al., 2017). The proposed method accomplishes necessary cross-passage information interaction with a time/memory complexity linear in the length of the sequence and do not add much extra calculation burden. Our contributions in this work are as follows: (1) We propose Dynamic Self-attention (DynSA) for information interaction in a long sequence. (2) Token-level cross-passage information interaction is implemented through the application of the proposed DynSA at relatively less computational costs. (3) Our Dynamic Self-attention Network (DynSAN) achieves new state-of-the-art performance compared with previously published results on SearchQA, Quasar-T and WikiHop benchmarks. 2 Dynamic Self-attention Block This section introduces the Dynamic SelfAttention Block (DynSA Block), which is central to the proposed architecture. The overall architecture is depicted in Figure 1. The core idea of this module is a gated token selection mechanism and a self-attention. We expect that a gate can acquire the estimation of each token’s importance in an input sequence, and use this estimated importance to extract the most important K tokens. Then we run a self-attention, instead of computing the full self-attention matrix over all the tokens, only the chosen K tokens are x1 x2 x3 xL Conv u1 u2 u3 uL Non-linear g g Top K ui ui Scaled DotProduct Attention Linear y1 y2 y3 yL ai ai Output Input Linear H Repeat Pad h,1 h,K h,1 h,K h,1 h,L Figure 1: Architecture of the Dynamic Self-Attention Block. taken into account. This module results in lower memory consumption and makes the self-attention focus on the active part of a long input sequence. The above idea is implemented through stacking two structures: a local encoder and a dynamic selfattention module. 2.1 Local Encoder In the architecture, a local encoder is used to encode local information, such as short-range context, which is useful for disambiguation. 
The reasons for the local encoder are that (1) computing self-attention over only a few tokens of a long sequence may cause the self-attention to lose the capability of modeling short-range context for every position in the sequence, (2) after a position receives the attended information from long-range positions, the local encoder is needed to spread this information to its neighboring positions, and (3) previous works have shown that combining a local encoder with self-attention is beneficial in some tasks (Yu et al., 2018). A natural candidate for the local encoder is local convolution, which is widely used as a local feature extractor. Restricted self-attention (Vaswani et al., 2017) is also a choice. In this work, we adopt 1D convolution as the local encoder. Specifically, let $X \in \mathbb{R}^{D \times L}$ be the input matrix of an $L$-token sequence, where each token embedding is $D$-dimensional. The output of a convolutional layer is calculated with a residual connection: $\mathrm{Conv}(\mathrm{LN}(X)) + X$, where $\mathrm{LN}$ is layer normalization (Ba et al., 2016) and $\mathrm{Conv}$ denotes a convolutional layer. To reduce computational costs, we adopt depth-wise separable convolutions (Chollet, 2017) throughout this paper. The local encoder consists of a stack of 2 convolutional layers.
2.2 Dynamic Self-attention
Since our self-attention is performed over a set of tokens which are determined dynamically, we call it Dynamic Self-Attention (DynSA). DynSA is based on the hypothesis that, in a long sequence, the number of important tokens is much smaller than the sequence length. Here, to say a token is important means that the token contains the information necessary for the model to predict the answer, or that the token is non-negligible for modeling long-range semantic dependencies. DynSA intends to find the most important tokens with a token selection mechanism and then performs self-attention only over these chosen tokens. In DynSA, we use a gate to control how much of the output, which includes non-linear transformations and attended vectors, passes through this layer. A large gate activation value implies that the corresponding output is important in this layer. Thus, we use the gate activation as the basis of token selection. Given the output of the local encoder $U \in \mathbb{R}^{D \times L}$, the gate activation is computed via:
$$G = F_G(F_U(U)) \quad (1)$$
where $F_U$ denotes a non-linear fully connected layer and $F_G$ denotes an affine transformation with a sigmoid activation function. In our work, we allow the use of multi-head attention (Vaswani et al., 2017). Equation 1 outputs $G \in \mathbb{R}^{H \times L}$, which contains $H$ heads, and we use $g_h \in \mathbb{R}^{L}$ (the $h$-th row of $G$) to represent the gate output of the $h$-th head. The element $g_{h,i}$ of $g_h$ is the gate activation corresponding to the token at the $i$-th position. Then, in each head, we select the top $K$ tokens according to their corresponding gate activations in $g_h$, where $K$ is a hyper-parameter. In case the actual sequence length is less than $K$, we select all the tokens. We obtain the chosen tokens' embeddings $U_h = [u_{i_{h,1}}, \cdots, u_{i_{h,j}}, \cdots, u_{i_{h,K}}] \in \mathbb{R}^{D \times K}$, where $i_{h,j} \in \{1, 2, \cdots, L\}$ is the position index of the chosen token in the input sequence. We consider this a gated token selection mechanism. Scaled dot-product attention is adopted over the chosen tokens:
$$A_h = \mathrm{softmax}\!\left(\frac{Q_h K_h^{\top}}{\sqrt{D/H}}\right) \cdot V_h \quad (2)$$
where $Q_h \in \mathbb{R}^{\frac{D}{H} \times K}$, $K_h \in \mathbb{R}^{\frac{D}{H} \times K}$, and $V_h \in \mathbb{R}^{\frac{D}{H} \times K}$ are the query, key, and value, respectively; they are linear projections of the input $U_h$. $A_h \in \mathbb{R}^{\frac{D}{H} \times K}$ is the attended output matrix of the $h$-th head.
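To make Equations 1 and 2 concrete, here is a minimal sketch (not the authors' code) of the gated top-K token selection and the attention over the chosen tokens in PyTorch. The random weight matrices stand in for the learned layers, a single head is shown, and tokens are stored as rows (rather than columns, as in the paper) for readability; all of these are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

D, L, H, K = 128, 1000, 8, 256    # embedding dim, sequence length, heads, chosen tokens
d_h = D // H                       # per-head dimension (16)

U = torch.randn(D, L)              # local-encoder output, tokens as columns (paper convention)

# Eq. 1: G = F_G(F_U(U)) -- a non-linear dense layer followed by an affine
# layer with sigmoid, giving one gate activation per head and position.
W_u, W_g = torch.randn(D, D), torch.randn(H, D)
G = torch.sigmoid(W_g @ torch.relu(W_u @ U))          # (H, L)

h = 0                                                 # inspect the h-th head
g_h = G[h]                                            # (L,)
topk_idx = g_h.topk(min(K, L)).indices                # gated token selection
U_h = U[:, topk_idx].t()                              # (K, D): chosen tokens, one row per token

# Eq. 2: scaled dot-product attention restricted to the K chosen tokens.
W_q, W_k, W_v = (torch.randn(D, d_h) for _ in range(3))
Q_h, K_h, V_h = U_h @ W_q, U_h @ W_k, U_h @ W_v       # each (K, d_h)
scores = Q_h @ K_h.t() / d_h ** 0.5                   # (K, K) score matrix instead of (L, L)
A_h = F.softmax(scores, dim=-1) @ V_h                 # (K, d_h) attended outputs
print(A_h.shape)                                      # torch.Size([256, 16])
```

The point of the sketch is the shape of the score matrix: attention is computed over a K x K matrix of chosen tokens rather than the full L x L matrix, which is where the memory saving comes from.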
Next, we pad the unchosen positions with zero embeddings to restore the full sequence length, giving $A^{*}_h = \mathrm{Pad}(A_h) \in \mathbb{R}^{\frac{D}{H} \times L}$. The output of the $h$-th head $Z_h \in \mathbb{R}^{\frac{D}{H} \times L}$ is calculated as follows:
$$Z_h = (F_h + A^{*}_h) \cdot \frac{g_h}{\max(g_h)} \quad (3)$$
$$F_h = F^{H}_h(F_U(U)) \quad (4)$$
where $F^{H}_h$ is an affine layer. Equation 4 produces a non-linear transformation $F_h \in \mathbb{R}^{\frac{D}{H} \times L}$ of the input embeddings. Since zero embeddings are padded at the unchosen positions, adding $F_h$ avoids vanishing gradients when updating the parameters of the gate during training. In Equation 3, the maximum operation selects the maximum element of the vector $g_h$, and the division normalizes the elements so that the maximum activation is always one. Finally, the output $Y \in \mathbb{R}^{D \times L}$ of a DynSA block is the fusion of all heads:
$$Y = F_Y([Z_1; \cdots; Z_H]) + U \quad (5)$$
in which $F_Y$ denotes a linear projection and $[\cdot\,;\cdot]$ is the concatenation of the outputs of all heads. Optionally, we suggest adding a regularization on the gate activation to make it sparser, so that the activation values of unimportant tokens are almost zero and the model generates more discriminative gate activations. Experiments show that this regularization produces small gains in performance. Specifically, we jointly optimize the following regularization term when training the model:
$$\mathcal{L}^{*} = \beta \cdot \|G\|_1 \quad (6)$$
where $G$ represents the gate activation and $\|\cdot\|_1$ denotes the 1-norm. $\beta$ is a small hyper-parameter, which is set to $10^{-5}$ in our experiments.
[Figure 2: Architecture of the Dynamic Self-Attention Network (DynSAN) for multi-passage reading comprehension. Word and character embeddings of the question and of passages 1 to M pass through highway layers and per-text DynSA blocks, are aligned with the question through linear alignment layers, go through stacked DynSA blocks for cross-passage attention, and finally reach the prediction layer that outputs the answer.]
3 Token-level Dynamic Self-attention Network
This section introduces the application of our proposed Dynamic Self-attention Network (DynSAN) to the multi-passage RC task. Given a question and M passages, the model is required to predict a span from the passages to answer the question. Figure 2 illustrates the architecture of DynSAN.
3.1 Input Encoding
At the bottom of DynSAN, the input texts are first converted into distributional representations. We use the concatenation of word embeddings and character encodings for every token. For word embeddings, we adopt the pre-trained 300-dimensional fastText (Mikolov et al., 2018) word embeddings and fix them during training. Character encodings are obtained by performing convolution and max-pooling on 15-dimensional randomly initialized character embeddings, following Kim (2014). Character embeddings are trainable while word embeddings are fixed in the training phase. On top of the embeddings, we adopt a 2-layer highway network (Srivastava et al., 2015) for deep transformation. The output of the highway network is immediately mapped to $D$ dimensions through a linear projection, and we add sinusoidal positional embeddings (Vaswani et al., 2017) to the vector of each token to expose position information to the model. Then, the vectors are fed into a layer of DynSA blocks. These DynSA blocks are in charge of independently encoding context information inside the question and every passage, and the parameters of the DynSA blocks are shared within the layer. We use DynSA rather than full multi-head self-attention to avoid the massive memory consumption caused by exceptionally long passages.
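Putting Equations 1–6 together, the following is a compact, hedged sketch of a whole DynSA block as a PyTorch module. The layer names (`f_u`, `f_g`, `f_h`, `f_y`, `qkv`), the shared Q/K/V projection split across heads, and the residual connection to the block input are our own simplifications of the description above, not the authors' implementation; the convolutional local encoder is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynSASketch(nn.Module):
    """Hedged sketch of a DynSA block (Eq. 1-6); tokens are rows here."""

    def __init__(self, d_model=128, n_heads=8, k_tokens=256, beta=1e-5):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.k, self.d_head, self.beta = n_heads, k_tokens, d_model // n_heads, beta
        self.f_u = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())   # F_U
        self.f_g = nn.Linear(d_model, n_heads)                             # F_G (sigmoid applied in forward)
        self.qkv = nn.Linear(d_model, 3 * d_model)                         # Q, K, V projections for all heads
        self.f_h = nn.Linear(d_model, d_model)                             # F_h (Eq. 4), split across heads
        self.f_y = nn.Linear(d_model, d_model)                             # F_Y (Eq. 5)

    def forward(self, x):                       # x: (L, d_model), local encoder omitted
        L = x.size(0)
        u = self.f_u(x)                         # shared non-linear transform
        g = torch.sigmoid(self.f_g(u))          # (L, H) gate activations, Eq. 1
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        f = self.f_h(u)                         # (L, d_model), Eq. 4
        outputs = []
        for h in range(self.h):
            g_h = g[:, h]                                        # (L,)
            idx = g_h.topk(min(self.k, L)).indices               # gated token selection
            sl = slice(h * self.d_head, (h + 1) * self.d_head)   # this head's feature slice
            q_h, k_h, v_h = q[idx, sl], k[idx, sl], v[idx, sl]
            att = F.softmax(q_h @ k_h.t() / self.d_head ** 0.5, dim=-1) @ v_h   # Eq. 2
            a_star = x.new_zeros(L, self.d_head)                 # pad unchosen positions with zeros
            a_star[idx] = att
            z_h = (f[:, sl] + a_star) * (g_h / g_h.max()).unsqueeze(-1)         # Eq. 3
            outputs.append(z_h)
        # Eq. 5: fuse all heads; the paper adds U (local-encoder output), we add the block input.
        y = self.f_y(torch.cat(outputs, dim=-1)) + x
        reg = self.beta * g.abs().sum()                          # Eq. 6: L1 penalty on gate activations
        return y, reg


# Usage: a 1000-token sequence of 128-dimensional embeddings.
block = DynSASketch()
y, reg = block(torch.randn(1000, 128))
print(y.shape, float(reg))        # torch.Size([1000, 128]) and a small scalar
```

The returned regularization term would be added to the task loss during training, as described in Equation 6.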
3.2 Alignment Alignment is a common and necessary step to generate question-aware context vectors for each passage, here, we adopt the strategy used in Yu et al. (2018), in which it includes a trilinear co-attention (Weissenborn et al., 2017) and a heuristic combination with query-to-context (Seo et al., 2017). Due to the limited space, we encourage reading the references for detailed descriptions and omit the repeated introduction. Then, the question-aware context vectors are projected into the standard dimension D through a linear layer and are encoded by a layer of DynSA blocks again to build semantic representations inside each passage further. 3.3 Cross-Passage Attention Thus far, each passage aligns with the question independently, and DynSA blocks generate contextual embeddings inside each passage independently, so there is no interaction between passages. For multi-passage reading comprehension, cross-passage information interaction is beneficial to solve the problems, such as multihop reasoning, and multi-passage verification. Previous works either omit the cross-passage interaction (Clark and Gardner, 2018) or implement it at a relatively 2256 coarse granularity (Dehghani et al., 2019a). For example, in Dehghani et al. (2019a), each passage is encoded into a singular vector and self-attention is performed over these passage vectors. Instead of passage-level or block-level interaction (Shen et al., 2018b), in this work, we focus on modeling cross-passage long-range dependencies at tokenlevel through a cross-passage attention layer. We expect that fine-grain self-attention may keep precise semantic information. This layer consists of N stacked DynSA blocks. Specifically, as is shown in Figure 2, we concatenate the vector sequences of all passages end to end, and then stack N layers of DynSA blocks on top of this long vector sequence. If these passages are given in order, for instance, the passages have been ranked by a search engine, we add a rank embedding to each passage before the concatenation. The rank embeddings are randomly initialized, and the i-th rank embedding is added to every token vector in the i-th ranked passage. 3.4 Prediction Layer The prediction layer is used to extract the answer span based on the output of previous layers. Depend on the type of tasks, different architectures are chosen. In this work, we investigate extractive QA and multiple choice QA. 3.4.1 Extractive QA Extractive QA is challenging since we have to extract the answer span from the passages without any given candidate answer. In this paper, we adopt the Hierarchical Answer Spans (HAS) model (Pang et al., 2019) to solve this problem. Details are included in Pang et al. (2019), and we do not repeat it here due to limited space. In our implementation, the differences to Pang et al. (2019) are that the start/end probability distribution is calculated over all tokens as in Equation 7, RNN is replaced with DynSA block, and the paragraph quality estimator mentioned in Pang et al. (2019) is not used. 3.4.2 Multiple Choice QA In this type of task, a list of candidate answers is provided. Here, we assume S ∈RD×L as the output of the cross-passage attention layer, L represents the total length of the M passages, q denotes the question, and P = {p1, · · · , pM} denotes the set of passages. We first convert the token vectors into a probability distribution r ∈RL over all tokens, r = softmax(FS(S)) (7) where FS is a linear projection. 
The probability of choosing a candidate c as the answer is computed via: P(c|q, P) = X i∈Tc ri (8) where Tc is a set of positions where the candidate c’s mentions appear. During training, we optimize the log-likelihood of choosing the correct answer’s probability. 4 Experiments 4.1 Datasets We conduct experiments to study the performance of the proposed approach on three publicly available multi-passage RC datasets. SearchQA (Dunn et al., 2017) is an open domain QA dataset including about 140k questions crawled from J! Archive, and about 50 web page snippets, which are retrieved from the Google search engine, as the supporting passages for each question. The authors of SearchQA have provided a processed version of this dataset, in which all words are lower-cased, and tokenization has been completed. Our experiments are based on this processed version. Quasar-T (Dhingra et al., 2017) is an open domain QA dataset including about 43k trivia questions collected from various internet sources, and 100 supporting passages for each question. These supporting passages are given in an order ranked by a search engine. WikiHop (Welbl et al., 2018) is a multiple choice QA dataset constructed using a structured knowledge base. One has to submit the model and work with the author to obtain the test score. For this dataset, a binary feature is concatenated with word embeddings and character embeddings to indicate whether a token is belong to any candidate answers. The above three datasets have their official train/dev/test sets, so we do not split them by ourselves. Some of the above datasets provide additional meta-data, we do not use this additional information in our experiments. We observe that those low-ranked passages play a critical role in improving the accuracy, thus we remain all supporting passages as the inputs of our 2257 Model SearchQA Quasar-T EM F1 EM F1 DrQA 41.9 48.7 37.7 44.5 R3 49.0 55.3 35.3 41.7 TraCRNet 52.9 65.1 43.2 54.0 Shared-Norm 59.8 67.1 38.6 45.4 HAS-QA 62.7 68.7 43.2 48.9 DynSAN 64.2 70.3 48.0 54.8 Human 43.9 – 51.5 60.6 Table 1: Performance of DynSAN and competing approaches on the test sets of two extractive QA tasks: SearchQA and Quasar-T. Competing approaches include DrQA (Chen et al., 2017), R3 (Wang et al., 2018a), TraCRNet (Dehghani et al., 2019a), SharedNorm (Clark and Gardner, 2018), HAS-QA (Pang et al., 2019). Human performance is referenced from the dataset paper. model. The averages/medians of the total length of the concatenation of all supporting passages for each question are around 1.9k/2k, 2.4k/2.4k, and 1.2k/1k in SearchQA, Quasar-T, and WikiHop respectively. Thus, we limit the maximum length not to exceed 5k tokens and discard a few exceptionally long cases. Tokenization is completed using spaCy 1 during preprocessing. 4.2 Experimental Setup In the DynSAN, the kernel size is 7 for all convolutional layers, the standard dimension D is 128, the number of heads H is 8, the number of chosen tokens K is 256. In the cross-passage attention layer, we stack N = 4 layers of DynSA blocks. The mini-batch size is set to 32. For regularization, we adopt dropout between every two layers and the dropout rate is 0.1. Adam (Kingma and Ba, 2015) with learning rate 0.001 is used for tuning the model parameters. We use a learning rate warm-up scheme in which the learning rate increases linearly from 0 to 0.001 in the first 500 steps. The models for multi-passage reading comprehension are trained on four 12GB K80 GPUs using synchronous SGD (Das et al., 2016). 
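For reference, the hyper-parameters reported in this subsection can be collected in one place. The sketch below is our own illustrative summary (the class, field, and helper names are not from the paper), with a toy helper showing the linear learning-rate warm-up described above.

```python
from dataclasses import dataclass


@dataclass
class DynSANConfig:
    """Hyper-parameters as reported in Section 4.2 (field names are ours)."""
    kernel_size: int = 7          # all convolutional layers
    d_model: int = 128            # standard dimension D
    n_heads: int = 8              # H
    k_tokens: int = 256           # number of chosen tokens K
    n_cross_blocks: int = 4       # N stacked DynSA blocks in the cross-passage attention layer
    batch_size: int = 32
    dropout: float = 0.1          # applied between every two layers
    learning_rate: float = 1e-3   # Adam
    warmup_steps: int = 500       # linear warm-up from 0 to the learning rate
    gate_l1_beta: float = 1e-5    # Eq. 6 regularization weight
    max_total_tokens: int = 5000  # concatenated supporting passages are capped at 5k tokens


def warmup_lr(step: int, cfg: DynSANConfig) -> float:
    """Linear warm-up: the learning rate rises from 0 to cfg.learning_rate over warmup_steps."""
    return cfg.learning_rate * min(1.0, step / cfg.warmup_steps)


cfg = DynSANConfig()
print([round(warmup_lr(s, cfg), 6) for s in (1, 250, 500, 1000)])
# [2e-06, 0.0005, 0.001, 0.001]
```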
Exponential moving average is adopted with a decay rate 0.9999. 4.3 Main Results The performance of our model and competing approaches are summarized in Table 1 and Table 2. For extractive QA, standard metrics are utilized: 1https://spacy.io Model Dev Test BiDAF (Seo et al., 2017) – 42.9 Coref GRU (Dhingra et al., 2018) 56.0 59.3 MHQA-GRN (Song et al., 2018) 62.8 65.4 Entity-GCN (De Cao et al., 2019) 64.8 67.6 CFC (Zhong et al., 2019) 66.4 70.6 DynSAN 70.1 71.4 Human (Welbl et al., 2018) – 74.1 Table 2: Performance of DynSAN and competing approaches on multiple choice QA dataset: WikiHop. Exact Match (EM) and F1 score (Rajpurkar et al., 2016). The scores are evaluated by the official script in Rajpurkar et al. (2016). For multiple choice QA, the performance is evaluated by the accuracy of choosing the correct answer. As we can see, the proposed model clearly outperforms all previously published approaches and achieves new state-of-the-art performances on the three datasets, which validates the effectiveness of the dynamic self-attention network for multi-passage RC. It is noteworthy that competing approaches use coarse-grain representations for cross-passage information interaction or omit cross-passage information interaction entirely. Ablation EM F1 Full architecture 64.2 70.3 (a) −Cross-passage attention 55.1 60.9 (b) −Self-attention 59.5 65.6 (c) −Convolutional layers 61.0 67.4 (d) −Gated token selection 60.5 66.6 (e) −Gate 59.9 66.0 (f) −Regularization (β = 0) 63.7 69.8 (g) Replace with Bi-BloSA 60.5 67.1 (h) + Convolutional layers 61.3 67.6 Table 3: Ablation study on SearchQA test set. “−”/“+” denotes removing/adding a model component, the indent in (e) and (h) means removing/adding a model component on the basis of the previous line. 4.4 Ablations In order to evaluate the individual contribution of each model component, we conduct an ablation study. Explicitly, we remove or replace model components and report the performance on the SearchQA test set in Table 3. In (a), we remove the cross-passage attention. In (b), we remove all self-attention, i.e., the context information is modeled by the convolutional layers only. In (c), we 2258 Question: Which vegetable is a Welsh emblem? Answer: leek Prediction: leek Question: What gemstone was reputed to heal eye ailments? Answer: emerald Prediction: pearl … A pungent vegetable is the national emblem of Wales ... … The leek (a vegetable) is a national emblem ... … The vegetable called leek is also considered to … ... the reason why the daffodil is used as an emblem is ... ... Lochcarron of Scotland has a new Welsh Emblem ... ... air force emblem ferrari prancing ... ... pearl was used therapeutically to heal eye ailments ... ... The gemstone gets its name from its resemblance to the eye of a tiger ... ... Copper is used by medical science for many ailments ... ... Iris Agate: Use to heal burns ... ... 9th December 2008 Crystal Healing ... Figure 3: Case study on the Quasar-T dev set to show which tokens are selected as important tokens by the gated token selection mechanism in DynSA block. Important tokens are shaded. remove all convolutional layers in DynSA blocks. In (d), we remove the gated token selection mechanism in DynSA blocks; in other words, which K tokens are selected is decided randomly rather than by the gate activation. Further, we remove the gate itself from (d) in (e). In (f), we remove regularization on gate activation by setting β = 0. 
In (g), we replace the DynSA block with Bi-BloSA (Shen et al., 2018b), which is proposed for long-sequence modeling but a block-level selfattention. The Bi-BloSA is implemented using the author’s open source code. On the basis of (g), we combine Bi-BloSA with convolutional layers in (h). As is shown in Table 3, cross-passage attention is most critical to the performance (almost 10% drop), the results prove the necessity of formation interaction between passages. Since we set K = 256, and most singular passages are within 256 tokens, the DynSA models local context for every position before the concatenation of all passages. Therefore, removing convolutional layers does not degrade the model entirely in (c). Self-attention and convolutional layers account for 4.7% and 2.9% performance drop respectively, and it illustrates that self-attention plays a more critical role than convolutional layers in modeling context information. In (d), the performance reduces significantly, proving the effectiveness of the gated token selection mechanism in the proposed architecture. Compare (e) to (d) and compare (f) to the full architecture, it is concluded that the gate itself and the regularization also have slight benefits to the model. From (g) and (h), we learn that the token-level DynSA block outperforms the blocklevel Bi-BloSA by a large margin, verifying the superiority of fine-grain representation. 4.5 Qualitative Analysis We conduct a case study to show which tokens are selected as important tokens by the gated token seLayer -1 Layer 0 Layer 1 Layer 2 Layer 3 Layer 4 0 20 40 60 80 100 Percentage (%) 62.9 18.7 97.0 99.4 98.3 93.5 2.2 16.4 0.6 0.1 0.3 1.1 34.9 64.9 2.4 0.5 1.4 5.4 0 g 1/3 1/3 < g 2/3 2/3 < g 1 (a) Layer -1 Layer 0 Layer 1 Layer 2 Layer 3 Layer 4 0 500 1000 1500 2000 2500 Count 1079 2362 145 25 72 326 (b) Figure 4: Quantitative analysis on the Quasar-T dev set. Layers are indexed from the bottom up, the DynSA blocks in the cross-passage attention layer are indexed from layer 1 to layer 4, the two DynSA blocks below the cross-passage attention layer are indexed as layer -1 and layer 0 respectively. (a) The distribution of the number of tokens of different activities in each layer. Tokens are classified into three categories according to its activity value g. (b) The average amount of active tokens (g > 0.01) in each layer. lection mechanism. In a DynSA block, we define the maximum gate activation in all heads as a token’s activity. The activity reflects the estimated importance of a token. In this subsection, all the tokens are ranked according to the sum of a token’s activities in all DynSA blocks in the crosspassage attention layer. In Figure 3, two questionanswering instances are given, and the top-ranked tokens are shaded. As we can see, the model inclines to mark cue words and plausible answers as the important tokens in DynSA blocks. We conjecture that information interactions between plausible answers may play an answer verification role, 2259 SQuAD 1.1 Speedup Memory |θ| EM/F1 Bi-LSTM 1.0x/1.0x 4305 1.3M 70.5/79.8 Full SAN 3.5x/2.5x 8748 1.9M 70.6/80.1 Bi-BloSAN 3.4x/2.3x 6414 1.9M 66.7/76.8 DynSAN 4.3x/3.3x 4341 1.9M 69.9/79.5 Table 4: The time cost and memory consumption on SQuAD. The time cost is shown through the speedup rate with respect to Bi-LSTM. Both the training speedup rate and inference speedup rate are reported. The memory usage is measured in Megabyte. |θ| denotes the amount of trainable parameters in a model. Accuracy is measured by EM and F1. 
while information interactions between cue words may be considered as multihop reasoning. We also observe that in a lot of mispredicted instances the correct answer never obtains large gate activations in cross-passage attention layers. Perhaps this is a reason for misprediction. 4.6 Quantitative Analysis Figure 4(a) illustrates the distribution of the number of tokens of different activities in each layer. Token’s activity is defined as in subsection 4.5. We also count the average number of active tokens on the Quasar-T dev set. We define a token is active when its activity is greater than 0.01. Figure 4(b) reports the statistics. In general, the activity values tend to be polarized, i.e., either near zero or near one. It is probably caused by the normalization in Equation 3 and the regularization term in Equation 6. Besides, the intra-passage DynSA blocks (layer -1 and layer 0) have more active tokens, while the cross-passage blocks have less. It explains that more tokens take effect in understanding a single passage, while only a few important tokens are necessary for cross-passage information interaction. The results verify our observation mentioned in section 1. 4.7 Time Cost & Memory Consumption We also conduct experiments to show the computational costs of the proposed model and other baseline models. Specifically, we replace the DynSA blocks in Figure 2 with Bi-LSTM (Hochreiter and Schmidhuber, 1997), full SAN, and Bi-BloSAN (Shen et al., 2018b) respectively. Note that the full SAN refers to the model encoder block in QANet (Yu et al., 2018), which is a combination of global multi-head self-attention and local convolution. It is a strong baseline, and we use 4 5 6 7 8 log2(K) 55 60 65 70 75 EM/F1 66.4 66.9 68.5 69.7 70.3 60.2 61.0 62.5 63.5 64.2 SearchQA F1 SearchQA EM (a) 10 20 30 40 50 #Passages 50 55 60 65 70 75 EM/F1 62.5 67.3 69.1 69.5 70.3 56.4 61.3 63.1 63.4 64.2 SearchQA F1 SearchQA EM (b) Figure 5: (a) Effects of choosing different values of the hyper-parameter K in token selection. K is the number of chosen tokens, and is set to a power of 2. (b) Performance against the number of supporting passages. it to show the situation of full self-attention over all tokens. To avoid the long running time of Bi-LSTM and the out-of-memory issue of full SAN on multipassage RC tasks, we select SQuAD 1.1 (Rajpurkar et al., 2016) as the benchmark dataset. Since SQuAD is a single-passage RC task, we consider it as special multi-passage RC when the number of passages M equals to 1. In this experiment, top K = 32 tokens are chosen in DynSAN. Models are trained on a single 12GB K80 GPU. The results are shown in Table 4. Compared with the full SAN and Bi-LSTM, DynSAN has a slight accuracy drop while Bi-BloSAN degrades significantly. In terms of time cost and memory usage, DynSAN reaches 4.3x and 3.3x speedup and has a similar memory consumption to BiLSTM. Because of the characteristics of Bi-LSTM and the full SAN, as the sequence length increases, the advantage of DynSAN in speed and memory consumption would be more significant. Although DynSAN has a small accuracy drop to the full SAN, it seems that DynSAN is a relatively balanced model concerning speed, memory, and accuracy. 4.8 Model Analysis Effect of Token Selection Figure 5(a) shows the 2260 effects of the token selection. As the number of chosen tokens increases, performance improves as expected. When the number of chosen tokens is large enough, the gain becomes marginal. 
The choice of this hyper-parameter has an impact on the balance in speed, memory, and accuracy. Number of Passages Figure 5(b) answers following research question “How would the performance change with respect to the number of passages?” As more supporting passages are taken into consideration, both F1 and EM performance of our model continuously increase. The results verify that those low-ranked passages play a critical role in answering the questions. 5 Related Works As far as multi-passage reading comprehension be concerned, a lot of powerful deep learning approaches have been introduced to solve this problem. De Cao et al. (2019); Song et al. (2018) introduce graph convolutional network (GCN) and graph recurrent network (GRN) into this task. Dhingra et al. (2018) use co-reference annotations extracted from an external system to connect entity mentions for multihop reasoning. Zhong et al. (2019) propose an ensemble approach for coarsegrain and fine-grain co-attention networks. Pang et al. (2019) propose a hierarchical answer spans model to tackle the problem of multiple answer spans. Clark and Gardner (2018) uses a sharednormalization objective to produce accurate perpassage confidence scores and marginalize the probability of an answer candidate over all passages. While it outperforms most single-passage RC models by a large margin, it processes each passage independently omitting the multi-passage information interaction completely. In Wang et al. (2018b), cross-passage answer verification is definitely proposed, in which all the word embeddings in a passage are summed through attention mechanism to represent an answer candidate, and then each answer candidate attends to other candidates to collect supportive information. In Dehghani et al. (2019a), multihop reasoning is implemented by a Universal Transformer (Dehghani et al., 2019b) which is mainly based on Multi-head Self-attention (Vaswani et al., 2017) and a transition function. Our work is concerned with Self-attention Network (SAN) (Vaswani et al., 2017; Shen et al., 2018a). For the first time, Vaswani et al. (2017) explore the possibilities of completely replacing the recurrent neural network with self-attention to model context dependencies. Some papers propose variants of self-attention mechanisms, such as Shen et al. (2018c); Hu et al. (2018); Shaw et al. (2018); Yang et al. (2019). Besides, Shen et al. (2018b) explore reducing the computational complexity of self-attention. 6 Conclusion In this paper, we proposed a new Dynamic Selfattention (DynSA) architecture, which dynamically determinates what tokens are important for constructing intra-passage or cross-passage tokenlevel semantic representations. The proposed approach has the advantages in remaining fine-grain semantic information meanwhile reaching a balance between time, memory and accuracy. We showed the effectiveness of the proposed method in handling multi-passage reading comprehension using three benchmark datasets including SearchQA, Quasar-T, and WikiHop. Experimental results showed state-of-the-art performance. Acknowledgments We thank the anonymous reviewers for their valuable comments, and Johannes Welbl for evaluating our model on the hidden WikiHop test dataset. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd International Conference on Learning Representations, ICLR. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870– 1879, Vancouver, Canada. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics. 2261 Franc¸ois Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics. Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidynathan, Srinivas Sridharan, Dhiraj Kalamkar, Bharat Kaul, and Pradeep Dubey. 2016. Distributed deep learning using synchronous stochastic gradient descent. arXiv preprint arXiv:1602.06709. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2019. Question answering by reasoning across documents with graph convolutional networks. Conference of the North American Chapter of the Association for Computational Linguistics. Mostafa Dehghani, Hosein Azarbonyad, Jaap Kamps, and Maarten de Rijke. 2019a. Learning to transform, combine, and reason in open-domain question answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 681–689. ACM. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2019b. Universal transformers. International Conference on Learning Representations. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 42–48, New Orleans, Louisiana. Association for Computational Linguistics. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. 27th International Joint Conference on Artificial Intelligence. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
International Conference on Learning Representations. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. Has-qa: Hierarchical answer spans model for open-domain question answering. AAAI Conference on Artificial Intelligence. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. International Conference on Learning Representations. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018a. Disan: Directional self-attention network for rnn/cnn-free language understanding. In Thirty-Second AAAI Conference on Artificial Intelligence. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018b. Bi-directional block selfattention for fast and memory-efficient sequence modeling. International Conference on Learning Representations. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018c. Fast directional self-attention mechanism. arXiv preprint arXiv:1805.00912. Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. 2262 Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R 3: Reinforced ranker-reader for open-domain question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, and Haifeng Wang. 2018b. Multi-passage machine reading comprehension with cross-passage answer verification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1918–1927. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 271–280, Vancouver, Canada. Association for Computational Linguistics. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. 
Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics, 6:287–302. Baosong Yang, Longyue Wang, Derek F Wong, Lidia S Chao, and Zhaopeng Tu. 2019. Convolutional selfattention network. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. International Conference on Learning Representations. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. International Conference on Learning Representations.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2263–2272 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2263 Explicit Utilization of General Knowledge in Machine Reading Comprehension Chao Wang and Hui Jiang Department of Electrical Engineering and Computer Science Lassonde School of Engineering, York University 4700 Keele Street, Toronto, Ontario, Canada {chwang, hj}@eecs.yorku.ca Abstract To bridge the gap between Machine Reading Comprehension (MRC) models and human beings, which is mainly reflected in the hunger for data and the robustness to noise, in this paper, we explore how to integrate the neural networks of MRC models with the general knowledge of human beings. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset (20%–80%) of the training examples are available, KAR outperforms the state-ofthe-art MRC models by a large margin, and is still reasonably robust to noise. 1 Introduction Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), and TriviaQA (Joshi et al., 2017). Consequently, many pioneering MRC models have been proposed, such as BiDAF (Seo et al., 2016), R-NET (Wang et al., 2017), and QANet (Yu et al., 2018). According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings? OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, Jia and Liang (2017) revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passagequestion pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. 
As shown in Table 1, such general knowledge is essential to the reading comprehension ability of human beings. A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passagequestion pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet (Fellbaum, 1998) storing semantic knowledge, ConceptNet (Speer et al., 2017) storing commonsense knowledge, and 2264 Passage Question Answer Teachers may use a lesson plan to facilitate student learning, providing a course of study which is called the curriculum. What can a teacher use to help students learn? lesson plan Manufacturing accounts for a significant but declining share of employment, although the city’s garment industry is showing a resurgence in Brooklyn. In what borough is the garment business prominent? Brooklyn Table 1: Two examples about the importance of inter-word semantic connections to the reading comprehension ability of human beings: in the first one, we can find the answer because we know “facilitate” is a synonym of “help”; in the second one, we can find the answer because we know “Brooklyn” is a hyponym of “borough”. Freebase (Bollacker et al., 2008) storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words (Weissenborn et al., 2017; Mihaylov and Frank, 2018). However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. understandable and controllable) way to utilize general knowledge. The contribution of this paper is two-fold. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset (20%–80%) of the training examples are available, KAR outperforms the stateof-the-art MRC models by a large margin, and is still reasonably robust to noise. 2 Data Enrichment Method In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model. 2.1 Semantic Relation Chain WordNet is a lexical database of English, where words are organized into synsets according to their senses. 
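Before the formal definitions that follow, here is a small, hedged illustration of this organization using the NLTK WordNet interface (introduced in the next paragraph). The example word is taken from Table 1, and the exact synsets printed depend on the installed WordNet version.

```python
# Illustrative look-up: each word maps to one synset per sense, and words in
# the same synset express the same sense.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("facilitate"):
    print(synset.name(), synset.lemma_names())
# Depending on the WordNet version, one of these synsets also lists "help",
# which is how the ("facilitate", "help") pair from Table 1 ends up
# semantically connected.
```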
A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK (Bird and Loper, 2004), there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym → part holonym →hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain. 2.2 Inter-word Semantic Connection The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between 2265 them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word w, whose synsets are represented as a set Sw, we use another set S∗ w to represent its extended synsets, which includes all the synsets that are in Sw or that can be linked to from Sw through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, S∗ w will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter κ ∈N to represent the permitted maximum hop count of semantic relation chains. That is to say, only the chains having no more than κ hops can be used to construct S∗ w so that S∗ w becomes a function of κ: S∗ w(κ) (if κ = 0, we will have S∗ w(0) = Sw). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word w1 is semantically connected to another word w2 if and only if S∗ w1(κ) ∩Sw2 ̸= ∅. 2.3 General Knowledge Extraction Given a passage-question pair, the inter-word semantic connections that connect any word to any passage word are regarded as the general knowledge we need to extract. Considering the requirements of our MRC model, we only extract the positional information of such inter-word semantic connections. Specifically, for each word w, we extract a set Ew, which includes the positions of the passage words that w is semantically connected to (if w itself is a passage word, we will exclude its own position from Ew). We can control the amount of the extracted results by setting the hyper-parameter κ: if we set κ to 0, inter-word semantic connections will only exist between synonyms; if we increase κ, inter-word semantic connections will exist between more words. That is to say, by increasing κ within a certain range, we can usually extract more inter-word semantic connections from a passage-question pair, and thus can provide the MRC model with more general knowledge. 
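The following is a hedged sketch (not the authors' released code) of the extended-synset construction, the resulting connection test, and the position-set extraction described in Sections 2.2–2.3. The helper names (`extended_synsets`, `semantically_connected`, `connected_positions`) are ours, and only a representative subset of the sixteen WordNet relation types is expanded for brevity.

```python
# Sketch of Sections 2.2-2.3: kappa-hop extended synsets, the inter-word
# connection rule, and extraction of the position sets E_w.
from nltk.corpus import wordnet as wn

# A subset of WordNet semantic relations (NLTK exposes sixteen in total);
# each entry maps a synset to the synsets one hop away.
RELATIONS = [
    lambda s: s.hypernyms(), lambda s: s.hyponyms(),
    lambda s: s.part_holonyms(), lambda s: s.substance_holonyms(),
    lambda s: s.member_holonyms(), lambda s: s.part_meronyms(),
    lambda s: s.substance_meronyms(), lambda s: s.member_meronyms(),
    lambda s: s.attributes(),
]


def extended_synsets(word, kappa):
    """S*_w(kappa): synsets of `word` plus all synsets reachable through
    semantic relation chains of at most `kappa` hops."""
    frontier = set(wn.synsets(word))
    extended = set(frontier)
    for _ in range(kappa):
        frontier = {t for s in frontier for rel in RELATIONS for t in rel(s)}
        frontier -= extended          # only expand newly reached synsets
        extended |= frontier
    return extended


def semantically_connected(w1, w2, kappa):
    """Heuristic rule: w1 is connected to w2 iff S*_{w1}(kappa) and S_{w2} intersect."""
    return bool(extended_synsets(w1, kappa) & set(wn.synsets(w2)))


def connected_positions(word, passage_tokens, kappa, self_pos=None):
    """E_w: positions of the passage words that `word` is semantically connected to
    (recomputing S*_w per token is fine for a sketch; cache it in practice)."""
    return {i for i, p in enumerate(passage_tokens)
            if i != self_pos and semantically_connected(word, p, kappa)}


# The 3-hop chain from the text: keratin -> feather -> bird -> parrot.
print(semantically_connected("keratin", "parrot", kappa=3))   # expected True for the paper's WordNet version
print(connected_positions("facilitate",
                          ["teachers", "use", "lesson", "plan", "help", "student"],
                          kappa=1))
```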
However, due to the complexity and diversity of natural languages, only a part of the extracted results can serve as useful general knowledge, while the rest of them are useless for the prediction of answer spans, and the proportion of the useless part always rises when κ is set larger. Therefore we set κ through cross validation (i.e. according to the performance of the MRC model on the development examples). 3 Knowledge Aided Reader In this section, we elaborate our MRC model: Knowledge Aided Reader (KAR). The key components of most existing MRC models are their attention mechanisms (Bahdanau et al., 2014), which are aimed at fusing the associated representations of each given passage-question pair. These attention mechanisms generally fall into two categories: the first one, which we name as mutual attention, is aimed at fusing the question representations into the passage representations so as to obtain the question-aware passage representations; the second one, which we name as self attention, is aimed at fusing the question-aware passage representations into themselves so as to obtain the final passage representations. Although KAR is equipped with both categories, its most remarkable feature is that it explicitly uses the general knowledge extracted by the data enrichment method to assist its attention mechanisms. Therefore we separately name the attention mechanisms of KAR as knowledge aided mutual attention and knowledge aided self attention. 3.1 Task Definition Given a passage P = {p1, . . . , pn} and a relevant question Q = {q1, . . . , qm}, the task is to predict an answer span [as, ae], where 1 ≤as ≤ae ≤n, so that the resulting subsequence {pas, . . . , pae} from P is an answer to Q. 3.2 Overall Architecture As shown in Figure 1, KAR is an end-to-end MRC model consisting of five layers: Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. For each word, we use the pre-trained GloVe (Pennington et al., 2014) word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) (Kim, 2014). For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is d. Therefore we obtain the passage lexicon embeddings LP ∈Rd×n and the question lexicon embeddings LQ ∈Rd×m. Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. 2266 Figure 1: An end-to-end MRC model: Knowledge Aided Reader (KAR) For both the passage and the question, we process the lexicon embeddings (i.e. LP for the passage and LQ for the question) with a shared bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997), whose hidden state dimensionality is 1 2d. By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings CP ∈Rd×n and the question context embeddings CQ ∈Rd×m. Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse CQ into CP , the outputs of which are represented as ˜G ∈Rd×n. Then we process ˜G with a BiLSTM, whose hidden state dimensionality is 1 2d. By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories G ∈Rd×n, which are the question-aware passage representations. 
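The context embedding and memory layers above share one building block: a BiLSTM with hidden state dimensionality d/2 whose two directions are concatenated back to dimensionality d. A minimal PyTorch sketch of that block is given below; it is an illustration of the pattern rather than the authors' TensorFlow implementation, the names are ours, and it uses sequence-major tensors instead of the column-major notation of the paper.

```python
import torch
import torch.nn as nn

class SharedBiLSTM(nn.Module):
    """BiLSTM with hidden size d/2 whose forward and backward outputs are
    concatenated, so a length-n sequence of d-dimensional vectors is mapped
    to n output vectors that are again d-dimensional."""
    def __init__(self, d):
        super().__init__()
        assert d % 2 == 0
        self.rnn = nn.LSTM(input_size=d, hidden_size=d // 2,
                           batch_first=True, bidirectional=True)

    def forward(self, x):              # x: (batch, seq_len, d)
        out, _ = self.rnn(x)           # out: (batch, seq_len, d) = [fwd; bwd]
        return out

d = 600                                # dimensionality used in the paper
context_encoder = SharedBiLSTM(d)      # shared between passage and question
passage = torch.randn(2, 50, d)        # toy batch: 2 passages, 50 tokens each
question = torch.randn(2, 12, d)
C_P = context_encoder(passage)         # context embeddings C^P
C_Q = context_encoder(question)        # context embeddings C^Q
```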
Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse G into themselves, the outputs of which are represented as ˜H ∈Rd×n. Then we process ˜H with a BiLSTM, whose hidden state dimensionality is 1 2d. By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories H ∈Rd×n, which are the final passage representations. Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution os: ti = v⊤ s tanh(Wshpi + UsrQ) ∈R os = softmax({t1, . . . , tn}) ∈Rn where vs, Ws, and Us are trainable parameters; hpi represents the refined memory of each passage word pi (i.e. the i-th column in H); rQ represents the question summary obtained by performing an attention pooling over CQ. Then we obtain the answer end position distribution oe: ti = v⊤ e tanh(Wehpi + Ue[rQ; Hos]) ∈R oe = softmax({t1, . . . , tn}) ∈Rn where ve, We, and Ue are trainable parameters; [; ] represents vector concatenation. Finally we construct an answer span prediction matrix O = uptri(oso⊤ e ) ∈Rn×n, where uptri(X) represents the upper triangular matrix of a matrix X. Therefore, for the training, we minimize −log(Oas,ae) on each training example whose labeled answer span is [as, ae]; for the inference, we separately take the row index and column index of the maximum element in O as as and ae. 3.3 Knowledge Aided Mutual Attention As a part of the coarse memory layer, knowledge aided mutual attention is aimed at fusing the ques2267 tion context embeddings CQ into the passage context embeddings CP , where the key problem is to calculate the similarity between each passage context embedding cpi (i.e. the i-th column in CP ) and each question context embedding cqj (i.e. the j-th column in CQ). To solve this problem, Seo et al. (2016) proposed a similarity function: f(cpi, cqj) = v⊤ f [cpi; cqj; cpi ⊙cqj] ∈R where vf is a trainable parameter; ⊙represents element-wise multiplication. This similarity function has also been adopted by several other works (Clark and Gardner, 2017; Yu et al., 2018). However, since context embeddings contain high-level information, we believe that introducing the preextracted general knowledge into the calculation of such similarities will make the results more reasonable. Therefore we modify the above similarity function to the following form: f∗(cpi, cqj) = v⊤ f [c∗ pi; c∗ qj; c∗ pi ⊙c∗ qj] ∈R where c∗ x represents the enhanced context embedding of a word x. We use the pre-extracted general knowledge to construct the enhanced context embeddings. Specifically, for each word w, whose context embedding is cw, to construct its enhanced context embedding c∗ w, first recall that we have extracted a set Ew, which includes the positions of the passage words that w is semantically connected to, thus by gathering the columns in CP whose indexes are given by Ew, we obtain the matching context embeddings Z ∈Rd×|Ew|. Then by constructing a cw-attended summary of Z, we obtain the matching vector c+ w (if Ew = ∅, which makes Z = {}, we will set c+ w = 0): ti = v⊤ c tanh(Wczi + Uccw) ∈R c+ w = Z softmax({t1, . . . , t|Ew|}) ∈Rd where vc, Wc, and Uc are trainable parameters; zi represents the i-th column in Z. Finally we pass the concatenation of cw and c+ w through a dense layer with ReLU activation, whose output dimensionality is d. 
Therefore we obtain the enhanced context embedding c∗ w ∈Rd. Based on the modified similarity function and the enhanced context embeddings, to perform knowledge aided mutual attention, first we construct a knowledge aided similarity matrix A ∈ Rn×m, where each element Ai,j = f∗(cpi, cqj). Then following Yu et al. (2018), we construct the passage-attended question summaries RQ and the question-attended passage summaries RP : RQ = CQ softmax⊤ r (A) ∈Rd×n RP = CP softmaxc(A) softmax⊤ r (A) ∈Rd×n where softmaxr represents softmax along the row dimension and softmaxc along the column dimension. Finally following Clark and Gardner (2017), we pass the concatenation of CP , RQ, CP ⊙RQ, and RP ⊙RQ through a dense layer with ReLU activation, whose output dimensionality is d. Therefore we obtain the outputs ˜G ∈Rd×n. 3.4 Knowledge Aided Self Attention As a part of the refined memory layer, knowledge aided self attention is aimed at fusing the coarse memories G into themselves. If we simply follow the self attentions of other works (Wang et al., 2017; Huang et al., 2017; Liu et al., 2017b; Clark and Gardner, 2017), then for each passage word pi, we should fuse its coarse memory gpi (i.e. the i-th column in G) with the coarse memories of all the other passage words. However, we believe that this is both unnecessary and distracting, since each passage word has nothing to do with many of the other passage words. Thus we use the preextracted general knowledge to guarantee that the fusion of coarse memories for each passage word will only involve a precise subset of the other passage words. Specifically, for each passage word pi, whose coarse memory is gpi, to perform the fusion of coarse memories, first recall that we have extracted a set Epi, which includes the positions of the other passage words that pi is semantically connected to, thus by gathering the columns in G whose indexes are given by Epi, we obtain the matching coarse memories Z ∈Rd×|Epi|. Then by constructing a gpi-attended summary of Z, we obtain the matching vector g+ pi (if Epi = ∅, which makes Z = {}, we will set g+ pi = 0): ti = v⊤ g tanh(Wgzi + Uggpi) ∈R g+ pi = Z softmax({t1, . . . , t|Epi|}) ∈Rd where vg, Wg, and Ug are trainable parameters. Finally we pass the concatenation of gpi and g+ pi through a dense layer with ReLU activation, whose output dimensionality is d. Therefore we obtain the fusion result ˜hpi ∈Rd, and further the outputs ˜H = {˜hp1, . . . , ˜hpn} ∈Rd×n. 2268 4 Related Works Attention Mechanisms. Besides those mentioned above, other interesting attention mechanisms include performing multi-round alignment to avoid the problems of attention redundancy and attention deficiency (Hu et al., 2017), and using mutual attention as a skip-connector to densely connect pairwise layers (Tay et al., 2018). Data Augmentation. It is proved that properly augmenting training examples can improve the performance of MRC models. For example, Yang et al. (2017) trained a generative model to generate questions based on unlabeled text, which substantially boosted their performance; Yu et al. (2018) trained a back-and-forth translation model to paraphrase training examples, which brought them a significant performance gain. Multi-step Reasoning. Inspired by the fact that human beings are capable of understanding complex documents by reading them over and over again, multi-step reasoning was proposed to better deal with difficult MRC tasks. For example, Shen et al. 
(2017) used reinforcement learning to dynamically determine the number of reasoning steps; Liu et al. (2017b) fixed the number of reasoning steps, but used stochastic dropout in the output layer to avoid step bias. Linguistic Embeddings. It is both easy and effective to incorporate linguistic embeddings into the input layer of MRC models. For example, Chen et al. (2017) and Liu et al. (2017b) used POS embeddings and NER embeddings to construct their input embeddings; Liu et al. (2017a) used structural embeddings based on parsing trees to constructed their input embeddings. Transfer Learning. Several recent breakthroughs in MRC benefit from feature-based transfer learning (McCann et al., 2017; Peters et al., 2018) and fine-tuning-based transfer learning (Radford et al., 2018; Devlin et al., 2018), which are based on certain word-level or sentence-level models pretrained on large external corpora in certain supervised or unsupervised manners. 5 Experiments 5.1 Experimental Settings MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over 100, 000 passage-question pairs and has been randomly partitioned into three parts: a training set (80%), a development set (10%), and a test set (10%). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent (Jia and Liang, 2017), to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage. Implementation Details. We tokenize the MRC dataset with spaCy 2.0.13 (Honnibal and Montani, 2017), manipulate WordNet 3.0 with NLTK 3.3, and implement KAR with TensorFlow 1.11.0 (Abadi et al., 2016). For the data enrichment method, we set the hyper-parameter κ to 3. For the dense layers and the BiLSTMs, we set the dimensionality unit d to 600. For model optimization, we apply the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 0.0005 and a minibatch size of 32. For model evaluation, we use Exact Match (EM) and F1 score as evaluation metrics. To avoid overfitting, we apply dropout (Srivastava et al., 2014) to the dense layers and the BiLSTMs with a dropout rate of 0.3. To boost the performance, we apply exponential moving average with a decay rate of 0.999. 5.2 Model Comparison in both Performance and the Robustness to Noise We compare KAR with other MRC models in both performance and the robustness to noise. Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. As for the comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leader board and have reported their performance on the adversarial sets. There are totally five such comparative objects, which can be considered as representatives of the state-of-the-art MRC models. As shown in Table 2, on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the stateof-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. 
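For reference, the Exact Match and F1 metrics used above are the standard SQuAD measures. The sketch below follows the conventional computation, assuming the usual answer normalization (lowercasing, stripping punctuation and articles) and token-level overlap; it is included only to make the metrics concrete and is not the official evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style answer normalization: lowercase, drop punctuation,
    articles, and extra whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```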
2269 Single MRC model Dev set (EM / F1) Test set (EM / F1) AddSent (F1) AddOneSent (F1) FusionNet (Huang et al., 2017) 75.3 / 83.6 76.0 / 83.9 51.4 60.7 RaSoR+TR+LM (Salant and Berant, 2017) 77.0 / 84.0 77.6 / 84.2 47.0 57.0 SAN (Liu et al., 2017b) 76.2 / 84.1 76.8 / 84.4 46.6 56.5 R.M-Reader (Hu et al., 2017) 78.9 / 86.3 79.5 / 86.6 58.5 67.0 QANet (with data augmentation) (Yu et al., 2018) 75.1 / 83.8 82.5 / 89.3 45.2 55.7 KAR (ours) 76.7 / 84.9 76.1 / 83.5 60.1 72.3 Table 2: Model comparison based on SQuAD 1.1 and two of its adversarial sets: AddSent and AddOneSent. All the numbers are up to date as of October 18, 2018. Note that SQuAD 2.0 (Rajpurkar et al., 2018) is not involved in this paper, because it requires MRC models to deal with the problem of answer triggering, but this paper is aimed at improving the hunger for data and robustness to noise of MRC models. To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table 3, by increasing κ from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until κ reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by Seo et al. (2016) and the self attention proposed by Wang et al. (2017) separately, and find that the F1 score of KAR drops by 4.2 on the development set, 7.8 on AddSent, and 9.1 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of 71.9 and an F1 score of 80.8 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: 65.4 / 75.6) (Xiong et al., 2016) and BiDAF (EM / F1: 67.7 / 77.3) (Seo et al., 2016). The above empirical findings imply that general knowledge indeed plays an effective role in KAR. To demonstrate the advantage of our explicit way to utilize general knowledge over the existing implicit way, we compare the performance of KAR with that reported by Weissenborn et al. (2017), which used an encoding-based method to utilize the general knowledge dynamically retrieved from Wikipedia and ConceptNet. Since their best model only achieved an EM of 69.5 and an F1 score of 79.7 on the development set, which is much lower than the performance of KAR, we have good reason to believe that our explicit way works better than the existing implicit way. κ Average number of interword semantic connections per word Dev set (EM / F1) 0 0.39 74.2 / 82.8 1 0.63 74.6 / 83.1 2 1.24 75.1 / 83.5 3 2.21 76.7 / 84.9 4 3.68 75.9 / 84.3 5 5.58 75.3 / 83.8 Table 3: With κ set to different values in the data enrichment method, we calculate the average number of inter-word semantic connections per word as an estimation of the amount of general knowledge, and evaluate the performance of KAR on the development set. 5.3 Model Comparison in the Hunger for Data We compare KAR with other MRC models in the hunger for data. Specifically, instead of using all the training examples, we produce several training subsets (i.e. subsets of the training examples) so as to study the relationship between the proportion of the available training examples and the performance. We produce each training subset by sampling a specific number of questions from all the questions relevant to each passage. 
By separately sampling 1, 2, 3, and 4 questions on each passage, we obtain four training subsets, which separately contain 20%, 40%, 60%, and 80% of the training examples. As shown in Figure 2, with KAR, SAN (re-implemented), and QANet (reimplemented without data augmentation) trained on these training subsets, we evaluate their performance on the development set, and find that KAR 2270 Figure 2: With KAR, SAN, and QANet (without data augmentation) trained on the training subsets, we evaluate their performance on the development set. Figure 3: With KAR, SAN, and QANet (without data augmentation) trained on the training subsets, we evaluate their performance on AddSent. performs much better than SAN and QANet. As shown in Figure 3 and Figure 4, with the above KAR, SAN, and QANet trained on the same training subsets, we also evaluate their performance on the adversarial sets, and still find that KAR performs much better than SAN and QANet. That is to say, when only a subset of the training examples are available, KAR outperforms the state-ofthe-art MRC models by a large margin, and is still reasonably robust to noise. 6 Analysis According to the experimental results, KAR is not only comparable in performance with the state-ofthe-art MRC models, but also superior to them in terms of both the hunger for data and the robustFigure 4: With KAR, SAN, and QANet (without data augmentation) trained on the training subsets, we evaluate their performance on AddOneSent. ness to noise. The reasons for these achievements, we believe, are as follows: • KAR is designed to utilize the pre-extracted inter-word semantic connections from the data enrichment method. Some inter-word semantic connections, especially those obtained through multi-hop semantic relation chains, are very helpful for the prediction of answer spans, but they will be too covert to capture if we simply leverage recurrent neural networks (e.g. BiLSTM) and pre-trained word vectors (e.g. GloVe). • An inter-word semantic connection extracted from a passage-question pair usually also appears in many other passage-question pairs, therefore it is very likely that the inter-word semantic connections extracted from a small amount of training examples actually cover a much larger amount of training examples. That is to say, we are actually using much more training examples for model optimization than the available ones. • Some inter-word semantic connections are distracting for the prediction of answer spans. For example, the inter-word semantic connection between “bank” and “waterside” makes no sense given the context “the bank manager is walking along the waterside”. It is the knowledge aided attention mechanisms that enable KAR to ignore such distracting inter-word semantic connections so that only the important ones are used. 2271 7 Conclusion In this paper, we innovatively integrate the neural networks of MRC models with the general knowledge of human beings. Specifically, inter-word semantic connections are first extracted from each given passage-question pair by a WordNet-based data enrichment method, and then provided as general knowledge to an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the general knowledge to assist its attention mechanisms. Experimental results show that KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. 
In the future, we plan to use some larger knowledge bases, such as ConceptNet and Freebase, to improve the quality and scope of the general knowledge. Acknowledgments This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada. References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265– 283. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Steven Bird and Edward Loper. 2004. Nltk: the natural language toolkit. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 31. Association for Computational Linguistics. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051. Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2017. Reinforced mnemonic reader for machine reading comprehension. arXiv preprint arXiv:1705.02798. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2017. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. arXiv preprint arXiv:1711.07341. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Rui Liu, Junjie Hu, Wei Wei, Zi Yang, and Eric Nyberg. 2017a. Structural embedding of syntactic trees for machine comprehension. arXiv preprint arXiv:1703.00572. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2017b. Stochastic answer networks for machine reading comprehension. arXiv preprint arXiv:1712.03556. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Todor Mihaylov and Anette Frank. 2018. 
Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. arXiv preprint arXiv:1805.07858. 2272 Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Shimi Salant and Jonathan Berant. 2017. Contextualized word representations for reading comprehension. arXiv preprint arXiv:1712.03609. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems, pages 4906–4917. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Dirk Weissenborn, Tom´aˇs Koˇcisk`y, and Chris Dyer. 2017. Dynamic integration of background knowledge in neural nlu systems. arXiv preprint arXiv:1706.02596. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. 2017. Semi-supervised qa with generative domain-adaptive nets. arXiv preprint arXiv:1702.02206. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 222–228 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 222 Domain Adaptive Inference for Neural Machine Translation Danielle Saunders† and Felix Stahlberg† and Adri`a de Gispert‡ and Bill Byrne‡† †Department of Engineering, University of Cambridge, UK ‡SDL Research, Cambridge, UK {ds636, fs439, wjb31}@cam.ac.uk, {agispert, bbyrne}@sdl.com Abstract We investigate adaptive ensemble weighting for Neural Machine Translation, addressing the case of improving performance on a new and potentially unknown domain without sacrificing performance on the original domain. We adapt sequentially across two SpanishEnglish and three English-German tasks, comparing unregularized fine-tuning, L2 and Elastic Weight Consolidation. We then report a novel scheme for adaptive NMT ensemble decoding by extending Bayesian Interpolation with source information, and show strong improvements across test domains without access to the domain label. 1 Introduction Neural Machine Translation (NMT) models are effective when trained on broad domains with large datasets, such as news translation (Bojar et al., 2017). However, test data may be drawn from a different domain, on which general models can perform poorly (Koehn and Knowles, 2017). We address the problem of adapting to one or more domains while maintaining good performance across all domains. Crucially, we assume the realistic scenario where the domain is unknown at inference time. One solution is ensembling models trained on different domains (Freitag and Al-Onaizan, 2016). This approach has two main drawbacks. Firstly, obtaining models for each domain is challenging. Training from scratch on each new domain is impractical, while continuing training on a new domain can cause catastrophic forgetting of previous tasks (French, 1999), even in an ensemble (Freitag and Al-Onaizan, 2016). Secondly, ensemble weighting requires knowledge of the test domain. We address the model training problem with regularized fine-tuning, using an L2 regularizer (Barone et al., 2017) and Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017). We finetune sequentially to translate up to three domains with the same model. We then develop an adaptive inference scheme for NMT ensembles by extending Bayesian Interpolation (BI) (Allauzen and Riley, 2011) to sequence-to-sequence models.1 This lets us calculate ensemble weights adaptively over time without needing the domain label, giving strong improvements over uniform ensembling for baseline and fine-tuned models. 1.1 Adaptive training In NMT fine-tuning, a model is first trained on a task A, typically translating a large generaldomain corpus (Luong and Manning, 2015). The optimized parameters θ∗ A are fine-tuned on task B, a new domain. Without regularization, catastrophic forgetting can occur: performance on task A degrades as parameters adjust to the new objective. A regularized objective is: L(θ) = LB(θ) + Λ X j Fj(θj −θ∗ A,j)2 (1) where LA(θ) and LB(θ) are the likelihood of tasks A and B. We compare three cases: • No-reg, where Λ = 0 • L2, where Fj = 1 for each parameter index j • EWC, where Fj = E  ∇2LA(θj)  , a sample estimate of task A Fisher information. This effectively measures the importance of θj to task A. For L2 and EWC we tune Λ on the validation sets for new and old tasks to balance forgetting against new-domain performance. 
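A minimal PyTorch sketch of the regularized objective in Eq. 1 is shown below. It is an illustration under our own naming rather than the authors' Tensor2Tensor implementation; the diagonal Fisher term is estimated with averaged squared gradients over task-A batches, which is one common practical approximation of E[∇²L_A(θ_j)].

```python
import torch

def estimate_fisher(model, task_a_batches, loss_fn):
    """Diagonal Fisher estimate F_j for EWC: averaged squared gradients of
    the task-A loss (for the L2 variant, F_j is simply set to 1)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in task_a_batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(task_a_batches) for n, f in fisher.items()}

def regularized_loss(model, batch, loss_fn, theta_a, fisher, lam):
    """Eq. 1: L_B(theta) + Lambda * sum_j F_j * (theta_j - theta*_{A,j})^2."""
    loss = loss_fn(model, batch)                       # task-B likelihood term
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - theta_a[n]) ** 2).sum()
    return loss + lam * penalty
```

Here theta_a holds a frozen copy of the task-A optimum θ*_A. Setting every Fisher entry to one recovers the L2 variant, and lam = 0 recovers unregularized fine-tuning (no-reg).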
1See bayesian combination schemes at https:// github.com/ucam-smt/sgnmt 223 1.2 Adaptive decoding We extend the BI formalism to condition on a source sequence, letting us apply it to adaptive NMT ensemble weighting. We consider models pk(y|x) trained on K distinct domains, used for tasks t = 1, . . . , T. In our case a task is decoding from one domain, so T = K. We assume throughout that p(t) = 1 T , i.e. that tasks are equally likely absent any other information. A standard, fixed-weight ensemble would translate with: argmax y p(y|x) = argmax y K X k=1 Wkpk(y|x) (2) The BI formalism assumes that we have tuned sets of ensemble weights λk,t for each task. This defines a task-conditional ensemble p(y|x, t) = K X k=1 λk,t pk(y|x) (3) which can be used as a fixed weight ensemble if the task is known. However if the task t is not known, we wish to translate with: argmax y p(y|x) = argmax y T X t=1 p(t, y|x) (4) At step i, where hi is history y1:i−1: p(yi|hi, x) = T X t=1 p(t, yi|hi, x) = T X t=1 p(t|hi, x) p(yi|hi, t, x) = K X k=1 pk(yi|hi, x) T X t=1 p(t|hi, x)λk,t = K X k=1 Wk,i pk(yi|hi, x) (5) This has the form of an adaptively weighted ensemble where, by comparison with Eq. 2: Wk,i = T X t=1 p(t|hi, x)λk,t (6) In decoding, at each step i adaptation relies on a recomputed estimate of the task posterior: p(t|hi, x) = p(hi|t, x)p(t|x) PT t′=1 p(hi|t′, x)p(t′|x) (7) 1.2.1 Static decoder configurations In static decoding (Eq. 2), the weights Wk are constant for each source sentence x. BI simplifies to a uniform ensemble when λk,t = p(t|x) = 1 T . This leads to Wk,i = 1 K (see Eq. 6) as a fixed equalweight interpolation of the component models. Static decoding can also be performed with task posteriors conditioned only on the source sentence, which reflects the assumption that the history can be disregarded and that p(t|hi, x) = p(t|x). In the most straightforward case, we assume that only domain k is useful for task t: λk,t = δk(t) (1 for k = t, 0 otherwise). Model weighting simplifies to a fixed ensemble: Wk = p(k|x) (8) and decoding proceeds according to Eq. 2. We refer to this as decoding with an informative source (IS). We propose using Gt, an collection of n-gram language models trained on source language sentences from tasks t, to estimate p(t|x): p(t|x) = p(x|t)p(t) PT t′=1 p(x|t′)p(t′) = Gt(x) PT t′=1 Gt′(x) (9) In this way we use source language n-gram language models to estimate p(t = k|x) in Eq. 8 for static decoding with an informative source. 1.2.2 Adaptive decoder configurations For adaptive decoding with Bayesian Interpolation, as in Eq. 5, the model weights vary during decoding according to Eq. 6 and Eq. 7. We assume here that p(t|x) = p(t) = 1 T . This corresponds to the approach in Allauzen and Riley (2011), which considers only language model combination for speech recognition. We refer to this in experiments simply as BI. A refinement is to incorporate Eq. 9 into Eq. 7, which would be Bayesian Interpolation with an informative source (BI+IS). We now address the choice of λk,t. A simple but restrictive approach is to take λk,t = δk(t). We refer to this as identity-BI, and it embodies the assumption that only one domain is useful for each task. Alternatively, if we have validation data Vt for each task t, parameter search can be done to optimize λk,t for BLEU over Vt for each task. This is straightforward but relatively costly. 224 Figure 1: Adaptively adjusting ensemble model weights Wk,i (Eq. 
6) during decoding with BI We propose a simpler approach based on the source language n-gram language models from Eq. 9. We assume that each Gt is also a language model for its corresponding domain k. With Gk,t = P x∈Vt Gk(x), we take: λk,t = Gk,t P k′ Gk′,t (10) λk,t can be interpreted as the probability that task t contains sentences x drawn from domain k as estimated over the Vt. Figure 1 demonstrates this adaptive decoding scheme when weighting a biomedical and a general (news) domain model to produce a biomedical sentence under BI. The model weights Wk,i are even until biomedical-specific vocabulary is produced, at which point the in-domain model dominates. 1.2.3 Summary We summarize our approaches to decoding in Table 1. Decoder p(t|x) λk,t Static Uniform 1 T 1 T IS Eq. 9 δk(t) Adaptive Identity-BI 1 T δk(t) BI 1 T Eq. 10 BI+IS Eq. 9 Eq. 10 Table 1: Setting task posterior p(t|x) and domain-task weight λk,t for T tasks under decoding schemes in this work. Note that IS can be combined with either Identity-BI or BI by simply adjusting p(t|hi, x) according to Eq. 7. 1.3 Related Work Approaches to NMT domain adaptation include training data selection or generation (Sennrich et al., 2016a; Wang et al., 2017; Sajjad et al., 2017) and fine-tuning output distributions (Dakwale and Monz, 2017; Khayrallah et al., 2018). Vilar (2018) regularizes parameters with an importance network, while Thompson et al. (2018) freeze subsets of the model parameters before finetuning. Both observe forgetting with the adapted model on the general domain data in the realistic scenario where the test data domain is unknown. Barone et al. (2017) fine-tune with L2 regularization to reduce forgetting. Concurrently with our work, Thompson et al. (2019) apply EWC to reduce forgetting during NMT domain adaptation. During inference, Garmash and Monz (2016) use a gating network to learn weights for a multisource NMT ensemble. Freitag and Al-Onaizan (2016) use uniform ensembles of general and noreg fine-tuned models. 2 Experiments We report on Spanish-English (es-en) and EnglishGerman (en-de). For es-en we use the Scielo corpus (Neves et al., 2016), with Health as the general domain, adapting to Biological Sciences (‘Bio’). We evaluate on the domain-labeled Health and Bio 2016 test data. The en-de general domain is the WMT18 News task (Bojar et al., 2017), with all data except ParaCrawl oversampled by 2 (Sennrich et al., 2017). We validate on newstest17 and evaluate on newstest18. We adapt first to the IWSLT 2016 TED task (Cettolo et al., 2016), and then sequentially to the APE 2017 IT task (Turchi et al., 2017). We filter training sentences for minimum three tokens and maximum 120 tokens, and remove sentence pairs with length ratios higher than 4.5:1 or lower than 1:4.5. Table 2 shows filtered training sentence counts. Each language pair uses a 32K-merge source-target BPE vocabulary trained on the general domain (Sennrich et al., 2016b). We implement in Tensor2Tensor (Vaswani et al., 2018) and use its base Transformer model (Vaswani et al., 2017) for all NMT models. At inference time we decode with beam size 4 in SGNMT (Stahlberg et al., 2017) and evaluate with case-sensitive detokenized BLEU using SacreBLEU (Post, 2018). For BI, we use 4-gram KENLM models (Heafield, 2011). 
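To make the adaptive weighting of Eqs. 5-7 concrete, the following NumPy sketch shows a single decoding step. It is a simplified illustration rather than the SGNMT implementation: it assumes the component token distributions p_k(y_i|h_i, x) are already available as arrays, and all function and variable names are ours.

```python
import numpy as np

def bi_step(step_probs, lam, log_hist, log_p_task):
    """One decoding step of adaptive Bayesian Interpolation (Eqs. 5-7).

    step_probs : (K, V) array, p_k(y_i | h_i, x) for each component model
    lam        : (K, T) array, domain-task weights lambda_{k,t} (Eq. 10)
    log_hist   : (T,) array, running log p(h_i | t, x) for each task
    log_p_task : (T,) array, log p(t | x); uniform for plain BI, or the
                 source n-gram LM posteriors of Eq. 9 for BI+IS
    """
    # Task posterior p(t | h_i, x), Eq. 7
    log_post = log_hist + log_p_task
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # Adaptive ensemble weights W_{k,i} (Eq. 6) and token distribution (Eq. 5)
    weights = lam @ post                    # shape (K,)
    mixed = weights @ step_probs            # shape (V,)

    def update_history(y_i):
        """After committing to token y_i, update log p(h_{i+1} | t, x)
        using the task-conditional mixtures p(y_i | h_i, t, x) of Eq. 3."""
        return log_hist + np.log(lam.T @ step_probs[:, y_i])

    return mixed, weights, update_history
```

A uniform lam with a uniform log_p_task reduces to the fixed equal-weight ensemble, an identity lam gives identity-BI, and replacing the uniform log_p_task with the source n-gram language model posteriors of Eq. 9 yields BI+IS.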
225 Language pair Domain Training sentences es-en Health 586K Bio 125K en-de News 22.1M TED 146K IT 11K Table 2: Corpora training sentence counts 2.1 Adaptive training results Training scheme Health Bio 1 Health 35.9 33.1 2 Bio 29.6 36.1 3 Health and Bio 35.8 37.2 4 1 then Bio, No-reg 30.3 36.6 5 1 then Bio, L2 35.1 37.3 6 1 then Bio, EWC 35.2 37.8 Table 3: Test BLEU for es-en adaptive training. EWC reduces forgetting compared to other fine-tuning methods, while offering the greatest improvement on the new domain. Training scheme News TED IT 1 News 37.8 25.3 35.3 2 TED 23.7 24.1 14.4 3 IT 1.6 1.8 39.6 4 News and TED 38.2 25.5 35.4 5 1 then TED, No-reg 30.6 27.0 22.1 6 1 then TED, L2 37.9 26.7 31.8 7 1 then TED, EWC 38.3 27.0 33.1 8 5 then IT, No-reg 8.0 6.9 56.3 9 6 then IT, L2 32.3 22.6 56.9 10 7 then IT, EWC 35.8 24.6 57.0 Table 4: Test BLEU for en-de adaptive training, with sequential adaptation to a third task. EWC-tuned models give the best performance on each domain. We wish to improve performance on new domains without reduced performance on the general domain, to give strong models for adaptive decoding. For es-en, the Health and Bio tasks overlap, but catastrophic forgetting still occurs under noreg (Table 3). Regularization reduces forgetting and allows further improvements on Bio over noreg fine-tuning. We find EWC outperforms the L2 approach of Barone et al. (2017) in learning the new task and in reduced forgetting. In the en-de News/TED task (Table 4), all fine-tuning schemes give similar improvements on TED. However, EWC outperforms no-reg and L2 on News, not only reducing forgetting but giving 0.5 BLEU improvement over the baseline News model. The IT task is very small: training on IT data alone results in over-fitting, with a 17 BLEU improvement under fine-tuning. However, no-reg fine-tuning rapidly forgets previous tasks. EWC reduces forgetting on two previous tasks while further improving on the target domain. 2.2 Adaptive decoding results At inference time we may not know the test data domain to match with the best adapted model, let alone optimal weights for an ensemble on that domain. Table 5 shows improvements on data without domain labelling using our adaptive decoding schemes with unadapted models trained only on one domain (models 1+2 from Table 3 and 1+2+3 from Table 4). We compare with the ‘oracle’ model trained on each domain, which we can only use if we know the test domain. Uniform ensembling under-performs all oracle models except es-en Bio, especially on general domains. Identity-BI strongly improves over uniform ensembling, and BI with λ as in Eq. 10 improves further for all but es-en Bio. BI and IS both individually outperform the oracle for all but IS-News, indicating these schemes do not simply learn to select a single model. The combined scheme of BI+IS outperforms either BI or IS individually, except in en-de IT. We speculate IT is a distinct enough domain that p(t|x) has little effect on adapted BI weights. In Table 6 we apply the best adaptive decoding scheme, BI+IS, to models fine-tuned with EWC. The es-en ensemble consists of models 1+6 from Table 3 and the en-de ensemble models 1+7+10 from Table 4. As described in Section 2.1 EWC models perform well over multiple domains, so the improvement over uniform ensembling is less striking than for unadapted models. Nevertheless adaptive decoding improves over both uniform ensembling and the oracle model in most cases. 
With adaptive decoding, we do not need to assume whether a uniform ensemble or a single model might perform better for some potentially unknown domain. We highlight this in Table 7 by reporting results with the ensembles of Tables 5 and 6 over concatenated test sets, to mimic the realistic scenario of unlabelled test data. We additionally include the uniform no-reg ensembling approach given in Freitag and Al-Onaizan (2016) using models 1+4 from Table 3 and 1+5+8 from Table 4. Uniform no-reg ensembling outperforms unadapted uniform ensembling, since fine-tuning gives better in-domain performance. EWC 226 Decoder configuration es-en en-de Health Bio News TED IT Oracle model 35.9 36.1 37.8 24.1 39.6 Uniform 33.1 36.4 21.9 18.4 38.9 Identity-BI 35.0 36.6 32.7 25.3 42.6 BI 35.9 36.5 38.0 26.1 44.7 IS 36.0 36.8 37.5 25.6 43.3 BI + IS 36.0 36.9 38.4 26.4 44.7 Table 5: Test BLEU for 2-model es-en and 3-model en-de unadapted model ensembling, compared to oracle unadapted model chosen if test domain is known. Uniform ensembling generally underperforms the oracle, while BI+IS outperforms the oracle. Decoder configuration es-en en-de Health Bio News TED IT Oracle model 35.9 37.8 37.8 27.0 57.0 Uniform 36.0 36.4 38.9 26.0 43.5 BI + IS 36.2 38.0 38.7 26.1 56.4 Table 6: Test BLEU for 2-model es-en and 3-model en-de model ensembling for models adapted with EWC, compared to oracle model last trained on each domain, chosen if test domain is known. BI+IS outperforms uniform ensembling and in some cases outperforms the oracle. Decoder configuration Language pair Model type Oracle model Uniform BI + IS es-en Unadapted 36.4 34.7 36.6 No-reg 36.6 34.8 EWC 37.0 36.3 37.2 en-de Unadapted 36.4 26.8 38.8 No-reg 41.7 31.8 EWC 42.1 38.6 42.0 Table 7: Total BLEU for test data concatenated across domains. Results from 2-model es-en and 3-model en-de ensembles, compared to oracle model chosen if test domain is known. No-reg uniform corresponds to the approach of Freitag and Al-Onaizan (2016). BI+IS performs similarly to strong oracles with no test domain labeling. achieves similar or better in-domain results to noreg while reducing forgetting, resulting in better uniform ensemble performance than no-reg. BI+IS decoding with single-domain trained models achieves gains over both the naive uniform approach and over oracle single-domain models. BI+IS with EWC-adapted models gives a 0.9 / 3.4 BLEU gain over the strong uniform EWC ensemble, and a 2.4 / 10.2 overall BLEU gain over the approach described in Freitag and Al-Onaizan (2016). 3 Conclusions We report on training and decoding techniques that adapt NMT to new domains while preserving performance on the original domain. We demonstrate that EWC effectively regularizes NMT finetuning, outperforming other schemes reported for NMT. We extend Bayesian Interpolation with source information and apply it to NMT decoding with unadapted and fine-tuned models, adaptively weighting ensembles to out-perform the oracle case, without relying on test domain labels. We suggest our approach, reported for domain adaptation, is broadly useful for NMT ensembling. Acknowledgments This work was supported by EPSRC grant EP/L027623/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service2 funded by EPSRC Tier-2 capital grant EP/P020259/1. Initial work by Danielle Saunders took place during an internship at SDL Research. References Cyril Allauzen and Michael Riley. 2011. 
Bayesian Language Model Interpolation for Mobile Speech Input. In Proceedings of the Twelfth Annual Conference of the International Speech Communication Association. 2http://www.hpc.cam.ac.uk 227 Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, and Rico Sennrich. 2017. Regularization techniques for fine-tuning in Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1489–1494. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, et al. 2017. Findings of the 2017 Conference on Machine Translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169–214. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2016. The IWSLT 2016 evaluation campaign. In IWSLT 2016, International Workshop on Spoken Language Translation. Praveen Dakwale and Christof Monz. 2017. Finetuning for Neural Machine Translation with limited degradation across in-and out-of-domain data. Proceedings of the 16th Machine Translation Summit (MT-Summit 2017), pages 156–169. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for Neural Machine Translation. CoRR, abs/1612.06897. Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135. Ekaterina Garmash and Christof Monz. 2016. Ensemble learning for multi-source Neural Machine Translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1409–1418. Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197. Huda Khayrallah, Brian Thompson, Kevin Duh, and Philipp Koehn. 2018. Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 36–44. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13):3521–3526. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for Neural Machine Translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39. Minh-Thang Luong and Christopher D Manning. 2015. Stanford Neural Machine Translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation, pages 76–79. Mariana L Neves, Antonio Jimeno-Yepes, and Aur´elie N´ev´eol. 2016. The ScieLO Corpus: a Parallel Corpus of Scientific Publications for Biomedicine. In LREC. Matt Post. 2018. A call for clarity in reporting BLEU scores. CoRR, abs/1804.08771. Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural Machine Translation training in a multi-domain scenario. In IWSLT 2017, International Workshop on Spoken Language Translation. Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, and Philip Williams. 2017. The University of Edinburgh’s Neural MT Systems for WMT17. 
In Proceedings of the Second Conference on Machine Translation, pages 389– 399. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1715–1725. Felix Stahlberg, Eva Hasler, Danielle Saunders, and Bill Byrne. 2017. SGNMT–A Flexible NMT Decoding Platform for Quick Prototyping of New Models and Search Strategies. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 25–30. Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, and Philipp Koehn. 2018. Freezing subnetworks to analyze domain adaptation in Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation, pages 124–132. 228 Marco Turchi, Rajen Chatterjee, and Matteo Negri. 2017. WMT17 en-de APE shared task data. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for Neural Machine Translation. CoRR, abs/1803.07416. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. David Vilar. 2018. Learning hidden unit contribution for adapting neural machine translation models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, pages 500–505. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017. Sentence embedding for Neural Machine Translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 560–566.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273–2284 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2273 Multi-Style Generative Reading Comprehension Kyosuke Nishida1, Itsumi Saito1, Kosuke Nishida1, Kazutoshi Shinoda2∗, Atsushi Otsuka1, Hisako Asano1, Junji Tomita1 1NTT Media Intelligence Laboratory, NTT Corporation 2The University of Tokyo [email protected] Abstract This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success. 1 Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Current mainstream studies have treated RC as a process of extracting an answer span from one passage (Rajpurkar et al., 2016, 2018) or multiple passages (Joshi et al., 2017; Yang et al., 2018), which is usually done by predicting the start and end positions of the answer (Yu et al., 2018; Devlin et al., 2018). ∗Work done during an internship at NTT. 0 1 0 1 10 weeks </s> it takes 10 weeks to get new york state tax refund . </s> Question: “how long to get nys tax refund” Generate from Voc. Copy from Question Copy from Passages Mixture weights [NLG] [Q&A] Figure 1: Visualization of how our model generates an answer on MS MARCO. Given an answer style (top: NLG, bottom: Q&A), the model controls the mixture of three distributions for generating words from a vocabulary and copying words from the question and multiple passages at each decoding step. The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Alexa. In comparison with answer span extraction, however, the natural language generation (NLG) capability for RC has been less studied. While datasets such as MS MARCO (Bajaj et al., 2018) and NarrativeQA (Kocisk´y et al., 2018) have been proposed for providing abstractive answers, the stateof-the-art methods for these datasets are based on answer span extraction (Wu et al., 2018; Hu et al., 2018). Generative models suffer from a dearth of training data to cover open-domain questions. 
Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as well-formed sentences, which make sense even without the context of the question and passages, and concise phrases. These capabilities complement each other, but previous studies cannot use and control different styles within a model. In this study, we propose Masque, a generative model for multi-passage RC. It achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. The main contributions of this study are as follows. Multi-source abstractive summarization. We introduce the pointer-generator mechanism (See et al., 2017) for generating an abstractive answer from the question and multiple passages, which covers various answer styles. We extend the mechanism to a Transformer (Vaswani et al., 2017) based one that allows words to be generated from a vocabulary and to be copied from the question and passages. Multi-style learning for style control and transfer. We introduce multi-style learning that enables our model to control answer styles and improves RC for all styles involved. We also extend the pointer-generator to a conditional decoder by introducing an artificial token corresponding to each style, as in (Johnson et al., 2017). For each decoding step, it controls the mixture weights over three distributions with the given style (Figure 1). 2 Problem Formulation This paper considers the following task: PROBLEM 1. Given a question with J words x^q = {x^q_1, ..., x^q_J}, a set of K passages, where the k-th passage is composed of L words x^{p_k} = {x^{p_k}_1, ..., x^{p_k}_L}, and an answer style label s, an RC model outputs an answer y = {y_1, ..., y_T} conditioned on the style. In short, given a 3-tuple (x^q, {x^{p_k}}, s), the system predicts P(y). The training data is a set of 6-tuples: (x^q, {x^{p_k}}, s, y, a, {r^{p_k}}), where a and {r^{p_k}} are optional. Here, a is 1 if the question is answerable with the provided passages and 0 otherwise, and r^{p_k} is 1 if the k-th passage is required to formulate the answer and 0 otherwise. 3 Proposed Model We propose a Multi-style Abstractive Summarization model for QUEstion answering, called Masque. Masque directly models the conditional probability p(y|x^q, {x^{p_k}}, s). As shown in Figure 2, it consists of the following modules. 1. The question-passages reader (§3.1) models interactions between the question and passages. 2. The passage ranker (§3.2) finds passages relevant to the question. 3. The answer possibility classifier (§3.3) identifies answerable questions. 4. The answer sentence decoder (§3.4) outputs an answer sentence conditioned on the target style. [Figure 2: Masque model architecture, comprising the reader (GloVe and ELMo embeddings with highway layers, shared encoder blocks, dual attention, and modeling encoder blocks), the passage ranker, the answer possibility classifier, and the decoder with a multi-source pointer-generator.] Our model is based on multi-source abstractive summarization: the answer that it generates can be viewed as a summary from the question and passages.
The model also learns multi-style answers together. With these two characteristics, we aim to acquire the style-independent NLG ability and transfer it to the target style. In addition, to improve natural language understanding in the reader module, our model considers RC, passage ranking, and answer possibility classification together as multi-task learning. 3.1 Question-Passages Reader The reader module is shared among multiple answer styles and the three task-specific modules. 3.1.1 Word Embedding Layer Let xq and xpk represent one-hot vectors (of size V ) for words in the question and the k-th passage. First, this layer projects each of the vectors to a dword-dimensional vector with a pretrained weight matrix W e ∈Rdword×V such as GloVe (Pennington et al., 2014). Next, it uses contextualized word representations via ELMo (Peters et al., 2018), which allows our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and con2275 textualized vectors is passed to a two-layer highway network (Srivastava et al., 2015) to fuse the two types of embeddings, as in (Seo et al., 2017). The highway network is shared by the question and passages. 3.1.2 Shared Encoder Layer This layer uses a stack of Transformer blocks, which are shared by the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a d-dimensional vector by a linear transformation. The outputs of this layer are Epk ∈Rd×L for each k-th passage, and Eq ∈Rd×J for the question. Transformer encoder block. The block consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism (Vaswani et al., 2017). Following GPT (Radford et al., 2018), the feed-forward network consists of two linear transformations with a GELU (Hendrycks and Gimpel, 2016) activation function in between. Each sub-layer is placed inside a residual block (He et al., 2016). For an input x and a given sub-layer function f, the output is LN(f(x) + x), where LN indicates the layer normalization (Ba et al., 2016). To facilitate these residual connections, all sub-layers produce a sequence of d-dimensional vectors. Note that our model does not use any position embeddings in this block because ELMo gives the positional information of the words in each sequence. 3.1.3 Dual Attention Layer This layer uses a dual attention mechanism to fuse information from the question to the passages as well as from the passages to the question. It first computes a similarity matrix U pk ∈ RL×J between the question and the k-th passage, as done in (Seo et al., 2017), where U pk lj = wa⊤[Epk l ; Eq j ; Epk l ⊙Eq j ] indicates the similarity between the l-th word of the k-th passage and the j-th question word. The wa ∈R3d are learnable parameters. The ⊙ operator denotes the Hadamard product, and the [; ] operator denotes vector concatenation across the rows. Next, the layer obtains the row and column normalized similarity matrices Apk = softmaxj(U pk⊤) and Bpk = softmaxl(U pk). It then uses DCN (Xiong et al., 2017) to obtain dual attention representations, Gq→pk ∈R5d×L and Gp→q ∈R5d×J: Gq→pk = [Epk; ¯Apk; ¯¯Apk; Epk ⊙¯Apk; Epk ⊙¯¯Apk] Gp→q = [Eq; ¯B; ¯¯B; Eq ⊙¯B; Eq ⊙¯¯B]. Here, ¯Apk = EqApk, ¯Bpk = EpkBpk, ¯¯Apk = ¯BpkApk, ¯¯Bpk = ¯ApkBpk, ¯B = maxk( ¯Bpk), and ¯¯B = maxk( ¯¯Bpk). 
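The following minimal sketch shows the dual attention for a single passage (an illustration, not the released implementation): tensors are row-major with shape (length, d), the batch dimension is omitted, and the question-side output Gp→q, which additionally max-pools ¯B and ¯¯B over the K passages, is left out for brevity. Function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def dual_attention(E_p, E_q, w_a):
    """Dual attention for one passage.

    E_p: (L, d) passage encodings, E_q: (J, d) question encodings,
    w_a: (3d,) learnable similarity weights.
    Returns G_q2p with shape (L, 5d).
    """
    L, d = E_p.shape
    J = E_q.shape[0]
    # Similarity U[l, j] = w_a . [E_p[l]; E_q[j]; E_p[l] * E_q[j]]
    Ep_exp = E_p.unsqueeze(1).expand(L, J, d)
    Eq_exp = E_q.unsqueeze(0).expand(L, J, d)
    U = torch.cat([Ep_exp, Eq_exp, Ep_exp * Eq_exp], dim=-1) @ w_a  # (L, J)

    A = F.softmax(U, dim=1)            # normalize over question words
    B = F.softmax(U, dim=0)            # normalize over passage words
    A_bar = A @ E_q                    # (L, d) question-aware passage rep.
    B_bar = B.transpose(0, 1) @ E_p    # (J, d) passage-aware question rep.
    A_dbar = A @ B_bar                 # (L, d) second-level coattention
    return torch.cat([E_p, A_bar, A_dbar, E_p * A_bar, E_p * A_dbar], dim=-1)
```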
3.1.4 Modeling Encoder Layer This layer uses a stack of the Transformer encoder blocks for question representations and obtains Mq ∈Rd×J from Gp→q. It also uses another stack for passage representations and obtains Mpk ∈Rd×L from Gq→pk for each k-th passage. The outputs of this layer, Mq and {Mpk}, are passed on to the answer sentence decoder; the {Mpk} are also passed on to the passage ranker and the answer possibility classifier. 3.2 Passage Ranker The ranker maps the output of the modeling layer, {Mpk}, to the relevance score of each passage. It takes the output for the first word, Mpk 1 , which corresponds to the beginning-of-sentence token, to obtain the aggregate representation of each passage sequence. Given wr ∈Rd as learnable parameters, it calculates the relevance of each k-th passage to the question as βpk = sigmoid(wr⊤Mpk 1 ). 3.3 Answer Possibility Classifier The classifier maps the output of the modeling layer to a probability for the answer possibility. It also takes the output for the first word, Mpk 1 , for all passages and concatenates them. Given wc ∈RKd as learnable parameters, it calculates the answer possibility for the question as P(a) = sigmoid(wc⊤[Mp1 1 ; . . . ; MpK 1 ]). 3.4 Answer Sentence Decoder Given the outputs provided by the reader module, the decoder generates a sequence of answer words one element at a time. It is autoregressive (Graves, 2013), consuming the previously generated words as additional input at each decoding step. 2276 3.4.1 Word Embedding Layer Let y represent one-hot vectors of the words in the answer. This layer has the same components as the word embedding layer of the reader module, except that it uses a unidirectional ELMo to ensure that the predictions for position t depend only on the known outputs at positions previous to t. Artificial tokens. To be able to use multiple answer styles within a single system, our model introduces an artificial token corresponding to the style at the beginning of the answer (y1), as done in (Johnson et al., 2017; Takeno et al., 2017). At test time, the user can specify the first token to control the style. This modification does not require any changes to the model architecture. Note that introducing the token at the decoder prevents the reader module from depending on the answer style. 3.4.2 Attentional Decoder Layer This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a d-dimensional vector by a linear transformation, and the output is a sequence of d-dimensional vectors: {s1, . . . , sT }. Transformer decoder block. In addition to the encoder block, this block consists of the second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2. As in (Vaswani et al., 2017), the selfattention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over Mq and Mpall, respectively. The Mpall is the concatenated outputs of the encoder stack for the passages, Mpall = [Mp1, . . . , MpK] ∈Rd×KL. Here, the [, ] operator denotes vector concatenation across the columns. This attention for the concatenated passages produces attention weights that are comparable between passages. 
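As an illustrative sketch (the style-token strings, the word2id mapping, and the function name below are hypothetical, not part of our released code), preparing the decoder input and the concatenated passage memory amounts to:

```python
import torch

# Hypothetical artificial style tokens added to the decoder vocabulary.
STYLE_TOKENS = {"nlg": "<nlg>", "qa": "<qa>"}

def prepare_decoder_inputs(answer_ids, style, passage_memories, word2id):
    """Prepend the style token y_1 and build the passage memory M_pall.

    answer_ids:       (T,) gold answer word ids (teacher forcing)
    style:            "nlg" or "qa"
    passage_memories: list of K tensors M_pk, each of shape (d, L)
    """
    style_id = torch.tensor([word2id[STYLE_TOKENS[style]]])
    dec_input = torch.cat([style_id, answer_ids], dim=0)    # (T + 1,)
    # Concatenating across columns gives M_pall in R^{d x KL}, so the decoder's
    # attention weights are directly comparable between passages.
    M_pall = torch.cat(passage_memories, dim=-1)            # (d, K * L)
    return dec_input, M_pall
```

Switching styles at test time only changes the first decoder token; the architecture and parameters stay the same.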
3.4.3 Multi-source Pointer-Generator Our extended mechanism allows both words to be generated from a vocabulary and words to be copied from both the question and multiple passages (Figure 3). We expect that the capability of copying words will be shared among answer styles. Additive Attention Additive Attention Final distribution Mixing weights Feed- Forward Feed- Forward Context vec. Attention weights Voc. dist. Passage Representations Question Representations Decoder t-th state Attention dist. query key,Êvalue Figure 3: Multi-source pointer-generator mechanism. For each decoding step t, mixture weights λv, λq, λp for the probability of generating words from the vocabulary and copying words from the question and the passages are calculated. The three distributions are weighted and summed to obtain the final distribution. Extended vocabulary distribution. Let the extended vocabulary, Vext, be the union of the common words (a small subset of the full vocabulary, V , defined by the input-side word embedding matrix) and all words appearing in the input question and passages. P v then denotes the probability distribution of the t-th answer word, yt, over the extended vocabulary. It is defined as: P v(yt) = softmax(W 2⊤(W 1st + b1)), where the output embedding W 2 ∈Rdword×Vext is tied with the corresponding part of the input embedding (Inan et al., 2017), and W 1 ∈Rdword×d and b1 ∈Rdword are learnable parameters. P v(yt) is zero if yt is an out-of-vocabulary word for V . Copy distributions. A recent Transformerbased pointer-generator randomly chooses one of the attention-heads to form a copy distribution; that approach gave no significant improvements in text summarization (Gehrmann et al., 2018). In contrast, our model uses an additional attention layer for each copy distribution on top of the decoder stack. For the passages, the layer takes st as the query and outputs αp t ∈RKL as the attention weights and cp t ∈Rd as the context vectors: epk l = wp⊤tanh(W pmMpk l + W psst + bp), αp t = softmax([ep1; . . . ; epK]), (1) cp t = P l αp tlMpall l , where wp, bp ∈Rd and W pm, W ps ∈Rd×d are learnable parameters. For the question, our model 2277 uses another identical layer and obtains αq t ∈RJ and cq t ∈Rd. As a result, P q and P p are the copy distributions over the extended vocabulary: P q(yt) = P j:xq j=yt αq tj, P p(yt) = P l:x pk(l) l =yt αp tl, where k(l) means the passage index corresponding to the l-th word in the concatenated passages. Final distribution. The final distribution of yt is defined as a mixture of the three distributions: P(yt) = λvP v(yt) + λqP q(yt) + λpP p(yt), λv, λq, λp = softmax(W m[st; cq t; cp t ] + bm), where W m ∈R3×3d and bm ∈R3 are learnable parameters. 3.4.4 Combined Attention In order not to attend words in irrelevant passages, our model introduces a combined attention. While the original technique combined word and sentence level attentions (Hsu et al., 2018), our model combines the word and passage level attentions. The word attention, Eq. 1, is re-defined as αp tl = αp tlβpk(l) P l′ αp tl′βpk(l′) . 3.5 Loss Function We define the training loss as the sum of losses via L(θ) = Ldec + γrankLrank + γclsLcls where θ is the set of all learnable parameters, and γrank and γcls are balancing parameters. The loss of the decoder, Ldec, is the negative log likelihood of the whole target answer sentence averaged over Nable answerable examples: Ldec = − 1 Nable X (a,y)∈D a T X t log P(yt), where D is the training dataset. 
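For illustration, the final mixture and the answerability-masked decoder loss can be sketched as follows (tensor names are ours; we assume the copy distributions P q and P p have already been scattered onto the extended vocabulary):

```python
import torch
import torch.nn.functional as F

def final_distribution(P_v, P_q, P_p, s_t, c_q, c_p, W_m, b_m):
    """Mix the generative and copy distributions at one decoding step.

    P_v, P_q, P_p: (V_ext,) distributions over the extended vocabulary
    s_t: (d,) decoder state; c_q, c_p: (d,) question / passage context vectors
    W_m: (3, 3d), b_m: (3,) mixture parameters
    """
    lambdas = F.softmax(W_m @ torch.cat([s_t, c_q, c_p]) + b_m, dim=-1)  # (3,)
    return lambdas[0] * P_v + lambdas[1] * P_q + lambdas[2] * P_p

def decoder_loss(log_P, targets, answerable):
    """Negative log-likelihood averaged over answerable examples only.

    log_P:      (N, T, V_ext) log of the final distributions
    targets:    (N, T) gold answer word ids
    answerable: (N,) 1 for answerable questions, 0 otherwise
    """
    nll = F.nll_loss(log_P.transpose(1, 2), targets, reduction="none").sum(dim=1)
    n_able = answerable.sum().clamp(min=1)
    return (answerable * nll).sum() / n_able
```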
The losses of the passage ranker, Lrank, and the answer possibility classifier, Lcls, are the binary cross entropy between the true and predicted values averaged over all N examples: Lrank = − 1 NK X k X rpk ∈D rpk log βpk+ (1 −rpk) log(1 −βpk)  , Lcls = −1 N X a∈D a log P(a)+ (1 −a) log(1 −P(a))  . Dataset Subset Train Dev. Eval. ALL 808,731 101,093 101,092 MS MARCO ANS 503,370 55,636 – NLG 153,725 12,467 – NarrativeQA Summary 32,747 3,461 10,557 Table 1: Numbers of questions used in the experiments. 4 Experiments on MS MARCO 2.1 We evaluated our model on MS MARCO 2.1 (Bajaj et al., 2018). It is the sole dataset providing abstractive answers with multiple styles and serves as a great test bed for building open-domain QA agents with the NLG capability that can be used in smart devices. The details of our setup and output examples are in the supplementary material. 4.1 Setup Datasets. MS MARCO 2.1 provides two tasks for generative open-domain QA: the Q&A task and the Q&A + Natural Language Generation (NLG) task. Both tasks consist of questions submitted to Bing by real users, and each question refers to ten passages. The dataset also includes annotations on the relevant passages, which were selected by humans to form the final answers, and on whether there was no answer in the passages. Answer styles. We associated the two tasks with two answer styles. The NLG task requires a wellformed answer that is an abstractive summary of the question and passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers it to be more concise than in the NLG task, averaging 13.1 words, and many of the answers do not contain the context of the question. For the question “tablespoon in cup”, a reference answer in the Q&A task is “16,” while that in the NLG task is “There are 16 tablespoons in a cup.” Subsets. In addition to the ALL dataset, we prepared two subsets for ablation tests as listed in Table 1. The ANS set consisted of answerable questions, and the NLG set consisted of the answerable questions and well-formed answers, so that NLG ⊂ANS ⊂ALL. We note that multi-style learning enables our model to learn from different answer styles of data (i.e., the ANS set), and multi-task learning with the answer possibility classifier enables our model to learn from both answerable and unanswerable data (i.e., the ALL set). Training and Inference. We trained our model with mini-batches consisting of multi-style an2278 NLG Q&A Model R-L B-1 R-L B-1 BiDAFa 16.91 9.30 23.96 10.64 Deep Cascade QAb 35.14 37.35 52.01 54.64 S-Net+CES2Sc 45.04 40.62 44.96 46.36 BERT+Multi-PGNetd 47.37 45.09 48.14 52.03 Selector+CCGe 47.39 45.26 50.63 52.03 VNETf 48.37 46.75 51.63 54.37 Masque (NLG; single) 49.19 49.63 48.42 48.68 Masque (NLG; ensemble) 49.61 50.13 48.92 48.75 Masque (Q&A; single) 25.66 36.62 50.93 42.37 Masque (Q&A; ensemble) 28.53 39.87 52.20 43.77 Human Performance 63.21 53.03 53.87 48.50 Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported. swers that were randomly sampled. We used a greedy decoding algorithm and did not use any beam search or random sampling, because they did not provide any improvements. Evaluation metrics and baselines. 
ROUGE-L and BLEU-1 were used to evaluate the models’ RC performance, where ROUGE-L is the main metric on the official leaderboard. We used the reported scores of extractive (Seo et al., 2017; Yan et al., 2019; Wu et al., 2018), generative (Tan et al., 2018), and unpublished RC models at the submission time. In addition, to evaluate the individual contributions of our modules, we used MAP and MRR for the ranker and F1 for the classifier, where the positive class was the answerable questions. 4.2 Results Does our model achieve state-of-the-art on the two tasks with different styles? Table 2 shows the performance of our model and competing models on the leaderboard. Our ensemble model of six training runs, where each model was trained with the two answer styles, achieved state-of-theart performance on both tasks in terms of ROUGEL. In particular, for the NLG task, our single model outperformed competing models in terms of both ROUGE-L and BLEU-1. Does multi-style learning improve the NLG performance? Table 3 lists the results of an ablation test for our single model (controlled with Model Train R-L B-1 Masque (NLG style; single) ALL 69.77 65.56 w/o multi-style learning (§3.4.2) NLG 68.20 63.95 ,→w/o Transformer (§3.1.2, §3.4.2) NLG 67.13 62.96 w/o passage ranker (§3.2) NLG 68.05 63.82 w/o possibility classifier (§3.3) ANS 69.64 65.41 Masque w/ gold passage ranker ALL 78.70 78.14 Table 3: Ablation test results on the NLG dev. set. The models were trained with the subset listed in “Train”. Model Train MAP MRR Bing (initial ranking) 34.62 35.00 Masque (single) ALL 69.51 69.96 w/o answer decoder (§3.4) ALL 67.03 67.49 w/o multi-style learning (§3.4.2) NLG 65.51 65.59 w/o possibility classifier (§3.3) ANS 69.08 69.54 Table 4: Passage ranking results on the ANS dev. set. the NLG style) on the NLG dev. set1. Our model trained with both styles outperformed the model trained with the single NLG style. Multi-style learning enabled our model to improve its NLG performance by also using non-sentence answers. Does the Transformer-based pointer-generator improve the NLG performance? Table 3 shows that our model also outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN (McCann et al., 2018). Our deep decoder captured the multi-hop interaction among the question, the passages, and the answer better than a single-layer LSTM decoder could. Does joint learning with the ranker and classifier improve NLG performance? Furthermore, Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. Joint learning thus had a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can perfectly predict the relevance of passages, significantly improved the RC performance. Passage ranking will be a key to developing a system that can outperform humans. Does joint learning improve the passage ranking performance? Table 4 lists the passage ranking performance on the ANS dev. set2. The 1We confirmed with the organizer that the dev. results were much better than the test results, but there was no problem. 2This evaluation requires our ranker to re-rank 10 passages. It is not the same as the Passage Re-ranking task. 2279 1.0 0.9 0.8 0.7 0.6 0.5 0.4 Precision 0.0 0.2 0.4 0.6 0.8 1.0 Recall F1=0.9 0.7 0.5 0.3 0.1 Figure 4: Precision-recall curve for answer possibility classification on the ALL dev. set. 
0 5 10 15 20 25 30 yesno which when who where all how what other why Prediction (Q&A) Reference (Q&A) Prediction (NLG) Reference (NLG) Length Figure 5: Lengths of answers generated by Masque broken down by the answer style and query type on the NLG dev. set. The error bars indicate standard errors. ranker shares the question-passages reader with the answer decoder, and this sharing contributed to improvements over the ranker trained without the answer decoder. Also, our ranker outperformed the initial ranking provided by Bing by a significant margin. Does our model accurately identify answerable questions? Figure 4 shows the precision-recall curve for answer possibility classification on the ALL dev. set. Our model identified the answerable questions well. The maximum F1 score was 0.7893, where the threshold of answer possibility was 0.4411. This is the first report on answer possibility classification with MS MARCO 2.1. Does our model control answer lengths with different styles? Figure 5 shows the lengths of the answers generated by our model broken down by the answer style and query type. The generated answers were relatively shorter than the reference answers, especially for the Q&A task, but well controlled with the target style for every query type. The short answers degraded our model’s BLEU scores in the Q&A task (Table 2) because of BLEU’s brevity penalty (Papineni et al., 2002). 5 Experiments on NarrativeQA Next, we evaluated our model on NarrativeQA (Kocisk´y et al., 2018). It requires understanding the underlying narrative rather than relying on shallow pattern matching. Our detailed setup and output examples are in the supplementary material. 5.1 Setup We only describe the settings specific to this experiment. Datasets. Following previous studies, we used the summary setting for the comparisons with the reported baselines, where each question refers to one summary (averaging 659 words), and there is no unanswerable questions. Our model therefore did not use the passage ranker and answer possibility classifier. Answer styles. The NarrativeQA dataset does not explicitly provide multiple answer styles. In order to evaluate the effectiveness of multi-style learning, we used the NLG subset of MS MARCO as additional training data. We associated the NarrativeQA and NLG datasets with two answer styles. The answer style of NarrativeQA (NQA) is different from that of MS MARCO (NLG) in that the answers are short (averaging 4.73 words) and contained frequently pronouns. For instance, for the question “Who is Mark Hunter?”, a reference is “He is a high school student in Phoenix.” Evaluation metrics and baselines. BLEU-1 and 4, METEOR, and ROUGE-L were used in accordance with the evaluation in the dataset paper (Kocisk´y et al., 2018). We used the reports of top-performing extractive (Seo et al., 2017; Tay et al., 2018; Hu et al., 2018) and generative (Bauer et al., 2018; Indurthi et al., 2018) models. 5.2 Results Does our model achieve state-of-the-art performance? Table 5 shows that our single model, trained with two styles and controlled with the NQA style, pushed forward the state-of-the-art by a significant margin. The evaluation scores of the model controlled with the NLG style were low because the two styles are different. Also, our model without multi-style learning (trained with only the NQA style) outperformed the baselines in terms of ROUGE-L. 
This indicates that our model architec2280 Model B-1 B-4 M R-L BiDAFa 33.72 15.53 15.38 36.30 DECAPROPb 42.00 23.42 23.42 40.07 MHPGM+NOICc 43.63 21.07 19.03 44.16 ConZNetd 42.76 22.49 19.24 46.67 RMR+A2De 50.4 26.5 N/A 53.3 Masque (NQA) 54.11 30.43 26.13 59.87 w/o multi-style learning 48.70 20.98 21.95 54.74 Masque (NLG) 39.14 18.11 24.62 50.09 Masque (NQA; valid.)f 52.78 28.72 25.38 58.94 Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set. ture itself is powerful for natural language understanding in RC. 6 Related Work and Discussion Transfer and multi-task learning in RC. Recent breakthroughs in transfer learning demonstrate that pre-trained language models perform well on RC with minimal modifications (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018, 2019). In addition, our model also uses ELMo (Peters et al., 2018) for contextualized embeddings. Multi-task learning is a transfer mechanism to improve generalization performance (Caruana, 1997), and it is generally applied by sharing the hidden layers between all tasks, while keeping task-specific layers. Wang et al. (2018) and Nishida et al. (2018) reported that the sharing of the hidden layers between the multi-passage RC and passage ranking tasks was effective. Our results also showed the effectiveness of the sharing of the question-passages reader module among the RC, passage ranking, and answer possibility classification tasks. In multi-task learning without task-specific layers, Devlin et al. (2018) and Chen et al. (2017) improved RC performance by learning multiple datasets from the same extractive RC setting. McCann et al. (2018) and Yogatama et al. (2019) investigated multi-task and curriculum learning on many different NLP tasks; their results were below task-specific RC models. Our multi-style learning does not use style-specific layers; instead uses a style-conditional decoder. Generative RC. S-Net (Tan et al., 2018) used an extraction-then-synthesis mechanism for multipassage RC. The models proposed by McCann et al. (2018), Bauer et al. (2018), and Indurthi et al. (2018) used an RNN-based pointer-generator mechanism for single-passage RC. Although these mechanisms can alleviate the lack of training data, large amounts of data are still required. Our multistyle learning will be a key technique enabling learning from many RC datasets with different styles. In addition to MS MARCO and NarrativeQA, there are other datasets that provide abstractive answers. DuReader (He et al., 2018), a Chinese multi-document RC dataset, provides longer documents and answers than those of MS MARCO. DuoRC (Saha et al., 2018) and CoQA (Reddy et al., 2018) contain abstractive answers; most of the answers are short phrases. Controllable text generation. Many studies have been carried out in the framework of style transfer, which is the task of rephrasing a text so that it contains specific styles such as sentiment. Recent studies have used artificial tokens (Sennrich et al., 2016; Johnson et al., 2017), variational auto-encoders (Hu et al., 2017), or adversarial training (Fu et al., 2018; Tsvetkov et al., 2018) to separate the content and style on the encoder side. On the decoder side, conditional language modeling has been used to generate output sentences with the target style. 
In addition, output length control with conditional language modeling has been well studied (Kikuchi et al., 2016; Takeno et al., 2017; Fan et al., 2018). Our style-controllable RC relies on conditional language modeling in the decoder. Multi-passage RC. The simplest approach is to concatenate the passages and find the answer from the concatenation, as in (Wang et al., 2017). Earlier pipelined models found a small number of relevant passages with a TF-IDF based ranker and passed them to a neural reader (Chen et al., 2017; Clark and Gardner, 2018), while more recent models have used a neural re-ranker to more accurately select the relevant passages (Wang et al., 2018; Nishida et al., 2018). Also, non-pipelined models (including ours) consider all the provided passages and find the answer by comparing scores between passages (Tan et al., 2018; Wu et al., 2018). The most recent models make a proper trade-off between efficiency and accuracy (Yan et al., 2019; Min et al., 2018). 2281 RC with unanswerable question identification. The previous work of (Levy et al., 2017; Clark and Gardner, 2018) outputted a no-answer score depending on the probability of all answer spans. Hu et al. (2019) proposed an answer verifier to compare an answer with the question. Sun et al. (2018) jointly learned an RC model and an answer verifier. Our model introduces a classifier on top of the question-passages reader, which is not dependent on the generated answer. Abstractive summarization. Current state-ofthe-art models use the pointer-generator mechanism (See et al., 2017). In particular, content selection approaches, which decide what to summarize, have recently been used with abstractive models. Most methods select content at the sentence level (Hsu et al., 2018; Chen and Bansal, 2018) or the word level (Pasunuru and Bansal, 2018; Li et al., 2018; Gehrmann et al., 2018). Our model incorporates content selection at the passage level in the combined attention. Query-based summarization has rarely been studied because of a lack of datasets. Nema et al. (2017) proposed an attentional encoder-decoder model; however, Saha et al. (2018) reported that it performed worse than BiDAF on DuoRC. Hasselqvist et al. (2017) proposed a pointer-generator based model; however, it does not consider copying words from the question. 7 Conclusion This study sheds light on multi-style generative RC. Our proposed model, Masque, is based on multi-source abstractive summarization and learns multi-style answers together. It achieved stateof-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. The key to its success is transferring the style-independent NLG capability to the target style by use of the question-passages reader and the conditional pointer-generator decoder. In particular, the capability of copying words from the question and passages can be shared among the styles, while the capability of controlling the mixture weights for the generative and copy distributions can be acquired for each style. Our future work will involve exploring the potential of our multi-style learning towards natural language understanding. References Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. Computing Research Repository (CoRR), arXiv:1607.06450. Version 1. 
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A human generated machine reading comprehension dataset. Computing Research Repository (CoRR), arXiv:1611.09268. Version 3. Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In Empirical Methods in Natural Language Processing (EMNLP), pages 4220–4230. Richard Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Association for Computational Linguistics (ACL), pages 1870–1879. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Association for Computational Linguistics (ACL), pages 675–686. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Association for Computational Linguistics (ACL), pages 845–855. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. Computing Research Repository (CoRR), arXiv:1810.04805. Version 1. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Workshop on Neural Machine Translation and Generation (NMT@ACL), pages 45–54. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Association for the Advancement of Artificial Intelligence (AAAI), pages 663–670. Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Empirical Methods in Natural Language Processing (EMNLP), pages 4098–4109. Alex Graves. 2013. Generating sequences with recurrent neural networks. Computing Research Repository (CoRR), arXiv:1308.0850. Version 5. Johan Hasselqvist, Niklas Helmertz, and Mikael K˚ageb¨ack. 2017. Query-based abstractive summarization using neural networks. arXiv, 1712.06100. 2282 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), pages 770–778. Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2018. DuReader: a chinese machine reading comprehension dataset from real-world applications. In Workshop on Machine Reading for Question Answering (MRQA@ACL), pages 37–46. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. Computing Research Repository (CoRR), arXiv:1606.08415. Version 2. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Association for Computational Linguistics (ACL), pages 132–141. Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, and Ming Zhou. 2018. Attention-guided answer distillation for machine reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP), pages 2077–2086. Minghao Hu, Furu Wei, Yuxing Peng, Zhen Huang, Nan Yang, and Ming Zhou. 2019. Read + Verify: Machine reading comprehension with unanswerable questions. 
In Association for the Advancement of Artificial Intelligence (AAAI). Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning (ICML), pages 1587– 1596. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations (ICLR). Sathish Reddy Indurthi, Seunghak Yu, Seohyun Back, and Heriberto Cuay´ahuitl. 2018. Cut to the chase: A context zoom-in network for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP), pages 570–575. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistic (TACL), 5:339–351. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Association for Computational Linguistics (ACL), pages 1601–1611. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Empirical Methods in Natural Language Processing (EMNLP), pages 1328–1338. Tom´as Kocisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistic (TACL), 6:317–328. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Computational Natural Language Learning (CoNLL), pages 333–342. Chenliang Li, Weiran Xu, Si Li, and Sheng Gao. 2018. Guiding generation for abstractive text summarization based on key information guide network. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 55–60. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. Computing Research Repository (CoRR), arXiv:1806.08730. Version 1. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Association for Computational Linguistics (ACL), pages 1725–1735. Preksha Nema, Mitesh M. Khapra, Anirban Laha, and Balaraman Ravindran. 2017. Diversity driven attention model for query-based abstractive summarization. In Association for Computational Linguistics (ACL), pages 1063–1072. Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2018. Retrieve-andread: Multi-task learning of information retrieval and reading comprehension. In Conference on Information and Knowledge Management (CIKM), pages 647–656. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Association for Computational Linguistics (ACL), pages 311–318. Ramakanth Pasunuru and Mohit Bansal. 2018. Multireward reinforced summarization with saliency and entailment. 
In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 646–653. 2283 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227– 2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Association for Computational Linguistics (ACL), pages 784–789. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 2383–2392. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. CoQA: A conversational question answering challenge. Computing Research Repository (CoRR), arXiv:1808.07042. Version 1. Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan. 2018. DuoRC: Towards complex language understanding with paraphrased reading comprehension. In Association for Computational Linguistics (ACL), pages 1683–1693. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Association for Computational Linguistics (ACL), pages 1073–1083. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 35–40. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. Computing Research Repository (CoRR), arXiv:1505.00387. Version 2. Fu Sun, Linyang Li, Xipeng Qiu, and Yang Liu. 2018. U-Net: Machine reading comprehension with unanswerable questions. Computing Research Repository (CoRR), arXiv:1810.06638. Version 1. Shunsuke Takeno, Masaaki Nagata, and Kazuhide Yamamoto. 2017. Controlling target features in neural machine translation via prefix constraints. In Workshop on Asian Translation (WAT@IJCNLP), pages 55–63. Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. 2018. S-Net: From answer extraction to answer synthesis for machine reading comprehension. In Association for the Advancement of Artificial Intelligence (AAAI), pages 5940–5947. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems (NeurIPS), pages 4911–4922. Yulia Tsvetkov, Alan W. Black, Ruslan Salakhutdinov, and Shrimai Prabhumoye. 2018. Style transfer through back-translation. 
In Association for Computational Linguistics (ACL), pages 866–876. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pages 6000–6010. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced reader-ranker for open-domain question answering. In Association for the Advancement of Artificial Intelligence (AAAI), pages 5981–5988. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Association for Computational Linguistics (ACL), pages 189–198. Hua Wu, Haifeng Wang, Sujian Li, Wei He, Yizhong Wang, Jing Liu, Kai Liu, and Yajuan Lyu. 2018. Multi-passage machine reading comprehension with cross-passage answer verification. In Association for Computational Linguistics (ACL), pages 1918– 1927. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In International Conference on Learning Representations (ICLR). Ming Yan, Jiangnan Xia, Chen Wu, Bin Bi, Zhongzhou Zhao, Ji Zhang, Luo Si, Rui Wang, Wei Wang, and Haiqing Chen. 2019. A deep cascade model for multi-document reading comprehension. In Association for the Advancement of Artificial Intelligence (AAAI). 2284 Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Empirical Methods in Natural Language Processing (EMNLP), pages 2369–2380. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. Computing Research Repository (CoRR), arXiv:1901.11373. Version 1. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In International Conference on Learning Representations (ICLR).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2285–2295 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2285 Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li National University of Defense Technology, Changsha, China {huminghao09,pengyuxing,huangzhen,dsli}@nudt.edu.cn Abstract This paper considers the reading comprehension task in which multiple documents are given as input. Prior work has shown that a pipeline of retriever, reader, and reranker can improve the overall performance. However, the pipeline system is inefficient since the input is re-encoded within each module, and is unable to leverage upstream components to help downstream training. In this work, we present RE3QA, a unified question answering model that combines context retrieving, reading comprehension, and answer reranking to predict the final answer. Unlike previous pipelined approaches, RE3QA shares contextualized text representation across different components, and is carefully designed to use high-quality upstream outputs (e.g., retrieved context or candidate answers) for directly supervising downstream modules (e.g., the reader or the reranker). As a result, the whole network can be trained end-to-end to avoid the context inconsistency problem. Experiments show that our model outperforms the pipelined baseline and achieves state-ofthe-art results on two versions of TriviaQA and two variants of SQuAD. 1 Introduction Teaching machines to read and comprehend text is a long-term goal of natural language processing. Despite recent success in leveraging reading comprehension (RC) models to answer questions given a related paragraph (Wang et al., 2017; Hu et al., 2018; Yu et al., 2018), extracting answers from documents or even a large corpus of text (e.g., Wikipedia or the whole web) remains to be an open challenge. This paper considers the multidocument RC task (Joshi et al., 2017), where the system needs to, given a question, identify the answer from multiple evidence documents. Unlike single-pargraph settings (Rajpurkar et al., 2016), this task typically involves a retriever for selecting few relevant document content (Chen et al., 2017), a reader for extracting answers from the retrieved context (Clark and Gardner, 2018), and even a reranker for rescoring multiple candidate answers (Bogdanova and Foster, 2016). Previous approaches such as DS-QA (Lin et al., 2018) and R3 (Wang et al., 2018a) consist of separate retriever and reader models that are jointly trained. Wang et al. (2018d) further propose to rerank multiple candidates for verifying the final answer. Wang et al. (2018b) investigate the full retrieve-read-rerank process by constructing a pipeline system that combines an information retrieval (IR) engine, a neural reader, and two kinds of answer rerankers. Nevertheless, the pipeline system requires re-encoding inputs for each subtask, which is inefficient for large RC tasks. Moreover, as each model is trained independently, highquality upstream outputs can not benefit downstream modules. For example, as the training proceeds, a neural retriever is able to provide more relevant context than an IR engine (Htut et al., 2018). However, the reader is still trained on the initial context retrieved using IR techniques. As a result, the reader could face a context inconsistency problem once the neural retriever is used. 
Similar observation has been made by Wang et al. (2018c), where integrating both the reader and the reranker into a unified network is more benefical than a pipeline (see Table 1 for more details). In this paper, we propose RE3QA, a neural question answering model that conducts the full retrieve-read-rerank process for multi-document RC tasks. Unlike previous pipelined approaches that contain separate models, we integrate an early-stopped retriever, a distantly-supervised reader, and a span-level answer reranker into a unified network. Specifically, we encode segments of text with pre-trained Transformer blocks (Devlin 2286 Model Retrieve Read Rerank Architecture DS-QA (Lin et al., 2018) 3 3 7 Pipeline R3 (Wang et al., 2018a) 3 3 7 Pipeline* Extract-Select (Wang et al., 2018d) 7 3 3 Pipeline* V-Net (Wang et al., 2018c) 7 3 3 Unified Re-Ranker (Wang et al., 2018b) 3 3 3 Pipeline RE3QA 3 3 3 Unified Table 1: Comparison of RE3QA with existing approaches. Our approach performs the full retrieve-read-rerank process with a unified network instead of a pipeline of separate models. *: R3 and Extract-Select jointly train two models with reinforcement learning. et al., 2018), where earlier blocks are used to predict retrieving scores and later blocks are fed with few top-ranked segments to produce multiple candidate answers. Redundant candidates are pruned and the rest are reranked using their span representations extracted from the shared contextualized representation. The final answer is chosen according to three factors: the retrieving, reading, and reranking scores. The whole network is trained end-to-end so that the context inconsistency problem can be alleviated. Besides, we can avoid reencoding input segments by sharing contextualized representations across different components, thus achieving better efficiency. We evaluate our approach on four datasets. On TriviaQA-Wikipedia and TriviaQA-unfiltered datasets (Joshi et al., 2017), we achieve 75.2 F1 and 71.2 F1 respectively, outperforming previous best approaches. On SQuAD-document and SQuAD-open datasets, both of which are modified versions of SQuAD (Rajpurkar et al., 2016), we obtain 14.8 and 4.1 absolute gains on F1 score over prior state-of-the-art results. Moreover, our approach surpasses the pipelined baseline with faster inference speed on both TriviaQA-Wikipedia and SQuAD-document. Source code is released for future research exploration1. 2 Related Work Recently, several large datasets have been proposed to facilitate the research in document-level reading comprehension (RC) (Clark and Gardner, 2018) or even open-domain question answering (Chen et al., 2017). TriviaQA (Joshi et al., 2017) is a challenging dataset containing over 650K question-answer-document triples, in which the document are either Wikipedia articles 1https://github.com/huminghao16/RE3QA or web pages. Quasar-T (Dhingra et al., 2017) and SearchQA (Dunn et al., 2017), however, pair each question-answer pair with a set of web page snippets that are more analogous to paragraphs. Since this paper considers the multi-document RC task, we therefore choose to work on TriviaQA and two variants of SQuAD (Rajpurkar et al., 2016). To tackle this task, previous approaches typically first retrieve relevant document content and then extract answers from the retrieved context. Choi et al. (2017) construct a coarse-to-fine framework that answers the question from a retrieved document summary. Wang et al. 
(2018a) jointly train a ranker and a reader with reinforcement learning (Sutton and Barto, 2011). Lin et al. (2018) propose a pipeline system consisting of a paragraph selector and a paragraph reader. Yang et al. (2019) combine BERT with an IR toolkit for open-domain question answering. However, Jia and Liang (2017) show that the RC models are easily fooled by adversarial examples. By only extracting an answer without verifying it, the models may predict a wrong answer and are unable to recover from such mistakes (Hu et al., 2019). In response, Wang et al. (2018d) present an extract-then-select framework that involves candidate extraction and answer selection. Wang et al. (2018c) introduce a unified network for cross-passage answer verification. Wang et al. (2018b) explore two kinds of answer rerankers in an existing retrieve-read pipeline system. There are some other works that handle this task in different perspectives, such as using hierarchical answer span representations (Pang et al., 2019), modeling the interaction between the retriever and the reader (Das et al., 2019), and so on. Our model differs from these approaches in several ways: (a) we integrate the retriever, reader, 2287 ... Retrieve Read Rerank Pruning document Answer q T-Block x J T-Block x (I-J) scores scoree Pruning answer scorea cn q T-Block x J T-Block x (I-J) An Bn scores scoree scorea T-Block 1c x J X q Early stopped scorer 2c 2 A 2 B d 1d scorer scorer Pruning answer Sliding window Figure 1: RE3QA architecture. The input documents are pruned and splitted into multiple segments of text, which are then fed into the model2. Few top-ranked segments are retrieved and the rest are early stopped. Multiple candidate answers are proposed for each segment, which are later pruned and reranked. RE3QA has three outputs per candidate answer: the retrieving, reading, and reranking scores. The network is trained end-to-end with a multi-task objective. “T-Block” refers to pre-trained Transformer block (Devlin et al., 2018). and reranker components into a unified network instead of a pipeline of separate models, (b) we share contextualized representation across different components while pipelined approaches reencode inputs for each model, and (c) we propose an end-to-end training strategy so that the context inconsistency problem can be alleviated. A cascaded approach is recently proposed by Yan et al. (2019), which also combines several components such as the retriever and the reader while sharing several sets of parameters. Our approach is different in that we ignore the document retrieval step since a minimal context phenomenon has been observed by Min et al. (2018), and we additionally consider answer reranking. 3 RE3QA Figure 1 gives an overview of our multi-document reading comprehension approach. Formally, given a question and a set of documents, we first filter out irrelevant document content to narrow the search space (§3.1). We then split the remaining context into multiple overlapping, fixed-length text segments. Next, we encode these segments along with the question using pre-trained Transformer blocks (Devlin et al., 2018) (§3.2). To maintain efficiency, the model computes a retrieving score based on shallow contextual representations with early summarization, and only returns a few top-ranked segments (§3.3). It then continues encoding these retrieved segments and outputs multiple candidate answers under the distant supervision setting (§3.4). 
Finally, redundant candidates are pruned and the rest are reranked using their span representations (§3.5). The final answer is chosen according to the retrieving, reading, and reranking scores. Our model is trained end-toend3 by back-propagation (§3.6). 3.1 Document Pruning The input to our model is a question q and a set of documents D = {d1, ..., dND}. Since the documents could be retrieved by a search engine (e.g., up to 50 webpages in the unfiltered version of TriviaQA (Joshi et al., 2017)) or Wikipedia articles could contain hundreds of paragraphs, we therefore first discard irrelevant document content at paragraph level. Following Clark and Gardner (2018), we select the top-K paragraphs that have smallest TF-IDF cosine distances with each question. These paragraphs are then sorted according to their positions in the documents and concatenated to form a new pruned document d. As a result, a large amount of unrelated text can be filtered out while a high recall is guaranteed. For example, nearly 95% of context are discarded while 3Note that “end-to-end training” only involves retrieving, reading, and reranking, but not the very first pruning step. 2288 the chance of selected paragraphs containing correct answers is 84.3% in TriviaQA-unfiltered. 3.2 Segment Encoding Typically, existing approaches either read the retrieved document at the paragraph level (Clark and Gardner, 2018) or at the sentence level (Min et al., 2018). Instead, following Hewlett et al. (2017), we slide a window of length l with a stride r over the pruned document d and produce a set of text segments C = {c1, ..., cn}, where n = l Ld−l r m + 1, and Ld is the document length. Next, we encode these segments along with the question using pretrained Transformer blocks (Devlin et al., 2018), which is a highly parallel encoding scheme instead of recurrent approaches such as LSTMs. The input to the network is a sequence of tokens x = (x1, ..., xLx) with length Lx. It is obtained by concatenating the question, segment, and several delimiters as [[CLS]; q; [SEP]; c; [SEP]], where [CLS] is a classification token and [SEP] is another token for differentiating sentences. We refer to this sequence as “segment” in the rest of this paper. For each token xi in x, its input representation is the element-wise addition of word, type, and position embeddings. Then, we can obtain the input embeddings h0 2 RLx⇥Dh, where Dh is hidden size. Next, a series of I pre-trained Transformer blocks are used to project the input embeddings into a sequence of contextualized vectors as: hi = TransformerBlock(hi−1), 8i 2 [1, I] Here, we omit a detailed introduction on the block architecture and refer readers to Vaswani et al. (2017) for more details. 3.3 Early-Stopped Retriever While we find the above parallel encoding scheme very appealing, there is a crucial computational inefficiency if all segments are fully encoded. For example, the average number of segments per instance in TriviaQA-unfiltered is 20 even after pruning, while the total number of Transformer blocks is 12 or 24. Therefore, we propose to rank all segments using early-summarized hidden representations as a mechanism for efficiently retrieving few top-ranked segments. Specifically, let hJ denote the hidden states in the J-th block, where J < I. We compute a scorer 2 R2 by summarizing hJ into a fix-sized vector with a weighted self aligning layer followed by multi-layer perceptrons as: µ = softmax(wµhJ) scorer = wrtanh(Wr XLx i=1 µihJ i ) where wµ, wr, Wr are parameters to be learned. 
After obtaining the scores of all segments, we pass the top-N ranked segments per instance to the subsequent blocks, and discard the rest. Here, N is relatively small so that the model can focus on reading the most revelant context. To train the retrieving component, we normalize scorer and define the objective function as: LI = − X2 i=1 yr i log(softmax(scorer)i) (1) where yr is an one-hot label indicating whether current segment contains at least one exactlymatched ground truth answer text or not. 3.4 Distantly-Supervised Reader Given the retrieved segments, the reading component aims to propose multiple candidate answers per segment. This is achieved by first elementwisely projecting the final hidden states hI into two sets of scores as follows: scores = wshI , scoree = wehI where scores 2 RLx and scoree 2 RLx are the scores for the start and end positions of answer spans, and ws, we are trainable parameter vectors. Next, let ↵i and βi denote the start and end indices of candidate answer ai. We compute a reading score, si = scores ↵i + scoree βi, and then propose top-M candidates according to the descending order of the scores, yielding a set of preliminary candidate answers A = {a1, ..., aM} along with their scores S = {s1, ..., sM}. Following previous work (Clark and Gardner, 2018), we label all text spans within a segment that match the gold answer as being correct, thus yielding two label vectors ys 2 RLx and ye 2 RLx. Since there is a chance that the segment does not contain any answer string, we then label the first element in both ys and ye as 1, and set the rest as 0. Finally, we define the objective function as: LII = − XLx i=1 ys i log(softmax(scores)i) − XLx j=1 ye j log(softmax(scoree)j) (2) 2289 3.5 Answer Reranker The answer reranker aims to rerank the candidate answers proposed by the previous reader. We first introduce a span-level non-maximum suppression algorithm to prune redundant candidate spans, and then predict the reranking scores for remaining candidates using their span representations. Span-level non-maximum suppression So far, the reader has proposed multiple candidate spans. However, since there is no constraint to predict an unique span for an answer string, multiple candidates may refer to the same text. As a result, other than the first correct span, all other spans on the same text would be false positives. Figure 2 shows a qualitative example of this phenomenon. Question: In the late 60s Owen Finlay MacLaren pioneered what useful item for parents of small chldren? Answer: baby buggy Candidates: baby buggy, collapsible baby buggy, buggy, folding buggy, folding chair ... Figure 2: An example from TriviaQA shows that multiple candidate answers refer to the same text. Inspired by the non-maximum suppression (NMS) algorithm (Rosenfeld and Thurston, 1971) that is used to prune redundant bounding boxes in object detection (Ren et al., 2015), we present a span-level NMS (Algorithm 1) to alleviate the problem. Specifically, span-level NMS starts with a set of candidate answers A with scores S. After selecting the answer ai that possesses the maximum score, we remove it from the set A and add it to B. We also delete any answer aj in A that is overlapped with ai. We define that two candidates overlap with each other if they share at least one boundary position4. This process is repeated for remaining answers in A, until A is empty or the size of B reaches a maximum threshold. 
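In plain Python, this pruning step can be sketched as follows (a simplified rendering of Algorithm 1; variable names are illustrative):

```python
def span_nms(candidates, scores, max_size):
    """Span-level non-maximum suppression.

    candidates: list of (start, end) index pairs proposed by the reader
    scores:     list of confidence scores s_i = score_s[start] + score_e[end]
    max_size:   maximum number M* of answers to keep
    """
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if len(kept) >= max_size:
            break
        start, end = candidates[i]
        # Two spans overlap if they share at least one boundary position.
        if any(start in (s, e) or end in (s, e) for s, e in kept):
            continue
        kept.append((start, end))
    return kept
```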
Candidate answer reranking Given the candidate answer ai in B, we compute a reranking score based on its span representation, where the representation is a weighted self-aligned vector bounded by the span boundary of the answer, similar to Lee et al. (2017); He et al. (2018): ⌘= softmax(w⌘hI ↵i:βi) scorea i = watanh(Wa Xβi j=↵i ⌘j−↵i+1hI j) 4We also experimented with the span-level F1 function, but found no performance improment. Algorithm 1 Span-level NMS Input: A = {ai}M i=1; S = {si}M i=1; M ⇤ A is the set of preliminary candidate answers S is the corresponding confidence scores M ⇤denotes the maximum size threshold 1: Initialize B = {} 2: while A 6= {} and size(B) < M ⇤do 3: i = arg max S 4: B = B [ {ai}; A = A −{ai}; S = S −{si} 5: for aj in A do 6: if overlap(ai, aj) then 7: A = A −{aj}; S = S −{sj} 8: return B Here, scorea 2 RM⇤, and hI ↵i:βi is a shorthand for stacking a list of vectors hI j (↵i j βi). To train the reranker, we construct two kinds of labels for each candidate ai. First, we define a hard label yhard i as the maximum exact match score between ai and ground truth answers. Second, we also utilize a soft label ysoft i , which is computed as the maximum F1 score between ai and gold answers, so that the partially correct prediction can still have a supervised signal. The above labels are annotated for each candidate in B, yielding yhard 2 RM⇤and ysoft 2 RM⇤. If there is no correct prediction in B (all elements of yhard are 0), then we replace the least confident candidate with a gold answer. Finally, we define the following reranking objective: LIII = − XM⇤ i=1 yhard i log(softmax(scorea)i) + XM⇤ i=1 ||ysoft i − scorea i PM⇤ j=1 scorea j ||2 (3) 3.6 Training and Inference Rather than separately training each component, we propose an end-to-end training strategy so that downstream components (e.g., the reader) can benefit from the high-quality upstream outputs (e.g., the retrieved segments) during training. Specifically, we take a multi-task learning approach (Caruna, 1993; Ruder, 2017), sharing the parameters of earlier blocks with a joint objective function defined as: J = LI + LII + LIII Algorithm 2 details the training process. Before each epoch, we compute scorer for all segments in the training set X. Then, we retrieve top-N segments per instance and construct a new training set ˜ X, which only contains retrieved segments. For 2290 Dataset #Ins #Doc #Seg #Tok #Tok* K N Recall TriviaQA-Wikipedia 7,993 1.8 17 10,256 2,103 14 8 94.8% TriviaQA-unfiltered 11,313 11.7 20 52,635 2,542 14 8 84.3% SQuAD-document 10,570 1 35 5,287 3,666 30 8 99.0% SQuAD-open 10,570 5 42 38,159 5,103 30 8 64.9% Table 2: Dataset statistics. ‘#Ins’ denotes the number of instances, while ‘#Doc’, ‘#Seg’, ‘#Tok’, and ‘#Tok*’ refer to the average number of documents, segments, and tokens before/after pruning, respectively. K and N are the number of retrieved paragraphs and segments. All statistics are calculated on the development set. each instance, if all of its top-ranked segments are negative examples, then we replace the least confident one with a gold segment. During each epoch, we sample two sets of mini-batch from both the X and the ˜ X, where the first batch is used to calculate LI and the other one for computing LII and LIII. Note that the contextualized vectors hI are shared across the reader and the reranker to avoid repeated computations. The batch size of X is dynamically decided so that both of X and ˜ X can be traversed with the same number of steps. 
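For concreteness, the joint objective J can be assembled from the three losses in Eqs. (1)-(3) as in the following PyTorch sketch; the toy scores, labels, and shapes are illustrative assumptions rather than values produced by the model.

import torch
import torch.nn.functional as F

def multi_positive_ce(scores, labels):
    """-sum_i y_i * log softmax(scores)_i, allowing several positive positions."""
    return -(labels * F.log_softmax(scores, dim=-1)).sum()

# L_I: retriever loss over the 2-dim segment relevance score (Eq. 1).
score_r = torch.tensor([0.3, 1.2])
y_r = torch.tensor([0.0, 1.0])               # segment contains a gold answer
loss_I = multi_positive_ce(score_r, y_r)

# L_II: reader loss over start/end position scores (Eq. 2).
Lx = 8
score_s, score_e = torch.randn(Lx), torch.randn(Lx)
y_s, y_e = torch.zeros(Lx), torch.zeros(Lx)
y_s[3], y_e[5] = 1.0, 1.0                    # one distantly-supervised span (3, 5)
loss_II = multi_positive_ce(score_s, y_s) + multi_positive_ce(score_e, y_e)

# L_III: reranker loss combining the hard (EM) and soft (F1) labels (Eq. 3).
M = 5
score_a = torch.rand(M)                      # scores of the M* kept candidates
y_hard = torch.tensor([1.0, 0.0, 0.0, 0.0, 0.0])
y_soft = torch.tensor([1.0, 0.4, 0.0, 0.2, 0.0])
loss_III = (multi_positive_ce(score_a, y_hard)
            + ((y_soft - score_a / score_a.sum()) ** 2).sum())

loss = loss_I + loss_II + loss_III           # joint objective J
print(float(loss))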
During inference, we take the retrieving, reading, and reranking scores into account. We compare the scores across all segments from the same instance, and choose the final answer according to the weighted addition of these three scores. Algorithm 2 End-to-end training of RE3QA Input: X = {Xi}t i=1, where Xi = {xj i}n j=1; M⇥; k X is the dataset containing t instances Xi is i-th instance containing n segments M⇥denotes the model with parameters ⇥ k is the maximum number of epoch 1: Initialize ⇥from pre-trained parameters 2: for epoch in 1, ..., k do 3: Compute scorer for all x in X 4: Retrieve top-N segments per instance 5: Construct a new ˜ X that includes retrieved x 6: for batchX , batch ˜ X in X, ˜ X do 7: Compute LI using batchX by Eq. 1 8: Compute LII using batch ˜ X by Eq. 2 9: Reuse hI to compute LIII by Eq. 3 10: Update M⇥with graident r(J ) 4 Experimental Setup Datasets We experiment on four datasets: (a) TriviaQA-Wikipedia (Joshi et al., 2017), a dataset of 77K trivia questions where each question is paired with one or multiple Wikipedia articles. (b) TriviaQA-unfiltered is a open-domain dataset that contains 99K question-answer tuples. The evidence documents are constructed by completing a web search given the question. (c) SQuADdocument, a variant of SQuAD dataset (Rajpurkar et al., 2016) that pairs each question with a full Wikipedia article instead of a specific paragraph. (d) SQuAD-open (Chen et al., 2017) is the open domain version of SQuAD where the evidence corpus is the entire Wikipedia domain. For fair comparision to other methods, we retrieve top5 articles as our input documents. The detailed statistics of these datasets are shown in Table 2. Data preprocessing Following Clark and Gardner (2018), we merge small paragraphs into a single paragraph of up to a threshold length in TriviaQA and SQuAD-open. The threshold is set as 200 by default. We manually tune the number of retrieved paragraphs K for each dataset, and set the number of retrieved segments N as 8. Following Devlin et al. (2018), we set the window length l as 384−Lq−3 so that Lx is 384 and set the stride r as 128, where Lq is the question length. We also calculate the answer recall after document pruning, which indicates the performance upper bound. Model settings We initialize our model using two publicly available uncased versions of BERT5: BERTBASE and BERTLARGE, and refer readers to Devlin et al. (2018) for details on model sizes. We use Adam optimizer with a learning rate of 3e-5 and warmup over the first 10% steps to fine-tune the network for 2 epochs. The batch size is 32 and a dropout probability of 0.1 is used. The number of blocks J used for early-stopped retriever is 3 for base model and 6 for large model by default. The number of proposed answers M is 20, while the threshold of NMS M⇤is 5. During inference, we tune the weights for retrieving, reading, and reranking, and set them as 1.4, 1, 1.4. Evaluation metrics We use mean average precision (MAP) and top-N to evaluate the retriev5https://github.com/google-research/bert 2291 Model Full Verified EM F1 EM F1 Baseline1 40.3 45.9 44.9 50.7 M-Reader2 46.9 52.9 54.5 59.5 Re-Ranker3 50.2 55.5 58.7 63.2 DrQA4 52.6 58.2 57.4 62.6 S-Norm5 64.0 68.9 68.0 72.9 MemoReader6 64.4 69.6 70.2 75.5 Reading Twice7 64.6 69.9 72.8 77.4 SLQA8 66.6 71.4 74.8 78.7 CAPE† 67.3 72.4 75.7 79.3 RE3QABASE 68.4 72.6 76.7 79.9 RE3QALARGE 71.0 75.2 80.3 83.0 Table 3: Results on the TriviaQA-Wikipedia test set: Joshi et al. (2017)1, Hu et al. (2018)2, Wang et al. 
(2018b)3, Chen et al. (2017)4, Clark and Gardner (2018)5, Back et al. (2018)6, Weissenborn et al. (2017)7, and Yan et al. (2019)8. † indicates unpublished works. Model EM F1 S-Norm (Clark and Gardner, 2018) 64.08 72.37 RE3QABASE 77.90 84.81 RE3QALARGE 80.71 87.20 Table 4: Results on the SQuAD-document dev set. ing component. As for evaluating the performance of reading and reranking, we measure the exact match (EM) accuracy and F1 score calculated between the final prediction and gold answers. Baselines We construct two pipelined baselines (denoted as BERTPIPE and BERTPIPE*) to investigate the context inconsistency problem. Both systems contain exactly the same components (e.g., retriever, reader, and reranker) as ours, except that they are trained separately. For BERTPIPE, the reader is trained on the context retrieved by an IR engine. As for BERTPIPE*, the reading context is obtained using the trained neural retriever. 5 Evaluation 5.1 Main Results Table 3 summarizes the results on the test set of TriviaQA-Wikipedia dataset. As we can see, our best model achieves 71.0 EM and 75.2 F1, firmly outperforming previous methods. Besides, Joshi et al. (2017) show that the evidence documents contain answers for only 79.7% of questions in the Wikipedia domain, suggesting that we are approaching the ceiling performance of this task. Model TriviaQA-unfiltered SQuAD-open EM F1 EM F1 DrQA1 32.3 38.3 27.1 R32 47.3 53.7 29.1 37.5 DS-QA3 48.7 56.3 28.7 36.6 Re-Ranker4 50.6 57.3 MINIMAL5 34.7 42.5 Multi-Step6 51.9 61.7 31.9 39.2 S-Norm7 61.3 67.2 HAS-QA8 63.6 68.9 BERTserini9 38.6 46.1 RE3QABASE 64.1 69.8 40.1 48.4 RE3QALARGE 65.5 71.2 41.9 50.2 Table 5: Results on TriviaQA-unfiltered test set and SQuAD-open dev set: Chen et al. (2017)1, Wang et al. (2018a)2, Lin et al. (2018)3, Wang et al. (2018b)4, Min et al. (2018)5, Das et al. (2019)6, Clark and Gardner (2018)7, Pang et al. (2019)8 and Yang et al. (2019)9. Model TriviaQA-Wikipedia SQuAD-document F1 Speed F1 Speed RE3QA 72.68 4.62 84.81 3.76 BERTPIPE 71.13 2.05 83.65 1.78 BERTPIPE* 71.59 2.08 84.04 1.82 Table 6: Comparison between our approach and the pipelined method. “Speed” denotes the number of instances processed per second during inference. However, the score of 80.3 EM on the verified set implies that there is still room for improvement. We also report the performance on documentlevel SQuAD in Table 4 to assess our approach in single-document setting. We find our approach adapts well: the best model achieves 87.2 F1. Note that the BERTLARGE model has obtained 90.9 F1 on the original SQuAD dataset (single-paragraph setting), which is only 3.7% ahead of us. Finally, to validate our approach in opendomain scenarios, we run experiments on the TriviaQA-unfiltered and SQuAD-open datasets, as shown in Table 5. Again, RE3QA surpasses prior works by an evident margin: our best model achieves 71.2 F1 on TriviaQA-unfiltered, and outperforms a BERT baseline by 4 F1 on SQuADopen, indicating that our approach is effective for the challenging multi-document RC task. 5.2 Model Analysis In this section, we analyze our approach by answering the following questions6: (a) Is end-to6The BERTBASE model is used by default in this section. 2292 Figure 3: F1 score on TriviaQA-Wikipedia and SQuAD-document w.r.t different number of retrieved segments. 
J TriviaQA-Wikipedia SQuAD-document MAP Top-3 Top-5 F1 Speed MAP Top-3 Top-5 F1 Speed 1 67.4 81.5 87.3 69.2 5.9 39.2 47.5 66.8 54.4 5.6 2 75.3 87.4 91.1 71.7 5.1 80.3 89.4 94.0 83.4 4.7 3 77.8 88.8 91.8 72.7 4.6 88.7 94.5 96.8 84.8 3.8 4 80.0 89.2 92.1 71.6 4.2 90.2 95.0 97.2 84.3 3.0 5 80.6 89.6 92.3 71.7 3.5 91.0 95.6 97.6 84.3 2.3 Table 7: Retrieving performance with different number of blocks J used for the early-stopped retriever. end network superior to the pipeline system? (b) How does each component contribute to the performance? (c) Is early-stopped retriever sufficient for returning high-quality segments? (d) How does the reranking loss affect the answer reranker? Comparison with pipelined method First, we compare our approach with the pipelined baselines on TriviaQA-Wikipedia and SQuAD-document development sets in Table 6. Our approach outperforms BERTPIPE by 1.6/1.2 F1 on two datasets respectively, and is also 2.3/2.1 times faster during inference. Moreover, RE3QA also beats the BERTPIPE* baseline by 1.1/0.8 F1, even as the parameters of retriever and reader are trained sequentially in BERTPIPE*. The above results confirm that the end-to-end training can indeed mitigate the context inconsistency problem, perhaps due to multi-task learning and parameter sharing. Our approach can also obtain inference speedups because of the fact that it avoids re-encoding inputs by sharing contextualized representations. Ablation study To show the effect of each individual component, we plot the F1 curve with respect to different number of retrieved segments in Figure 3. We notice that all curves become stable as more text are used, implying that our approach is robust across different amounts of context. Next, to evaluate the reranker, we only consider the retrieving and reading scores, and the performance decreases by 2.8/0.8 F1 on two datasets after the reranker is removed. To ablate the retriever, we select segments based on the TFIDF distance instead. The results show that the F1 score reduces by about 3.3 and 2.5 points on two datasets after the ablation. Removing both the retriever and the reranker performs the worst, which only achieves 68.1/81.0 F1 on two datasets at peak. The above results suggest that combining retriever, reader, and reranker is crucial for achieving promising performance. Effect of early-stopped retriever We assess whether the early-stopped retriever is sufficient for the segment retrieving task. Table 7 details the retrieving and reading results with different number of blocks J being used. As we can see, the model performs worst but maintains a high speed when J is 1. As J becomes larger, the retrieving metrices such as MAP, Top-3 and Top-5 significantly increase on both datasets. On the other hand, the speed continues to decline since more computations have been done during retrieving. A J of 6 eventually leads to an out-of-memory issue on both datasets. As for the F1 score, the model 2293 Model TriviaQA-Wikipedia SQuAD-document EM F1 EM F1 RE3QA 68.51 72.68 77.90 84.81 w/o NMS 68.29 72.33 77.67 84.36 w/o yhard 67.36 71.87 77.26 84.17 w/o ysoft 67.76 72.29 77.04 84.05 Table 8: Reranking performance with different ablations. yhard and ysoft refer to the two labels used to train the reranker. achieves the best result when J reaches 3, and starts to degrade as J continues rising. We experiment with the RE3QALARGE model and observe similar results, where the best J is 6. 
A likely reason for this observation may be that sharing highlevel features with the retriever could disturb the reading prediction. Therefore, the above results demonstrate that an early-stopped retriever with a relatively small J is able to reach a good trade-off between efficiency and effectiveness. Effect of answer reranker Finally, we run our model under different reranking ablations and report the results in Table 8. As we can see, removing the non-maximum suppression (NMS) algorithm has a negative impact on the performance, suggesting it is necessary to prune highlyoverlapped candidate answers before reranking. Ablating the hard label leads to a drop of 0.81 and 0.64 F1 scores on two datasets respectively, while the F1 drops by 0.39 and 0.76 points after removing the soft label. This implies that the hard label has a larger impact than the soft label on the TriviaQA dataset, but vice versa on SQuAD. 6 Conclusion We present RE3QA, a unified network that answers questions from multiple documents by conducting the retrieve-read-rerank process. We design three components for each subtask and show that an end-to-end training strategy can bring in additional benefits. RE3QA outperforms the pipelined baseline with faster inference speed and achieves state-of-the-art results on four challenging reading comprehension datasets. Future work will concentrate on designing a fast neural pruner to replace the IR-based pruning component, developing better end-to-end training strategies, and adapting our approach to other datasets such as Natural Questions (Kwiatkowski et al., 2019). Acknowledgments We would like to thank Mandar Joshi for his help with TriviaQA submissions. We also thank anonymous reviewers for their thoughtful comments and helpful suggestions. This work was supported by the National Key Research and Development Program of China (2018YFB0204300). References Seohyun Back, Seunghak Yu, Sathish Reddy Indurthi, Jihie Kim, and Jaegul Choo. 2018. Memoreader: Large-scale reading comprehension through neural memory controller. In Proceedings of EMNLP. Dasha Bogdanova and Jennifer Foster. 2016. This is how we do it: Answer reranking for open-domain how questions with paragraph vectors and minimal feature engineering. In Proceedings of NAACL. Rich Caruna. 1993. Multitask learning: A knowledgebased source of inductive bias. In Proceedings of ICML. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of ACL. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of ACL. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of ACL. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retrieverreader interaction for scalable open-domain question answering. In Proceedings of ICLR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. 
arXiv preprint arXiv:1704.05179. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. arXiv preprint arXiv:1805.04787. 2294 Daniel Hewlett, Llion Jones, Alexandre Lacoste, et al. 2017. Accurate supervised and semi-supervised machine reading for long documents. In Proceedings of EMNLP. Phu Mon Htut, Samuel R Bowman, and Kyunghyun Cho. 2018. Training a ranking function for opendomain question answering. In Proceedings of NAACL. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of IJCAI. Minghao Hu, Yuxing Peng, Zhen Huang, Nan Yang, Ming Zhou, et al. 2019. Read+ verify: Machine reading comprehension with unanswerable questions. In Proceedings of AAAI. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of ACL. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. TACL. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of ACL. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of ACL. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, and Xueqi Cheng. 2019. Has-qa: Hierarchical answer spans model for open-domain question answering. In Proceedings of AAAI. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of NIPS. Azriel Rosenfeld and Mark Thurston. 1971. Edge and curve detection for visual scene analysis. IEEE Transactions on computers, (5):562–569. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. Richard S Sutton and Andrew G Barto. 2011. Reinforcement learning: An introduction. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced ranker-reader for open-domain question answering. In Proceedings of AAAI. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In Proceedings of ICLR. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. 
Gated self-matching networks for reading comprehension and question answering. In Proceedings of ACL. Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, and Haifeng Wang. 2018c. Multi-passage machine reading comprehension with cross-passage answer verification. In Proceedings of ACL. Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, and Tian Wu. 2018d. Joint training of candidate extraction and answer selection for reading comprehension. In Proceedings of ACL. Dirk Weissenborn, Tom´aˇs Koˇcisk`y, and Chris Dyer. 2017. Dynamic integration of background knowledge in neural nlu systems. arXiv preprint arXiv:1706.02596. Ming Yan, Jiangnan Xia, Chen Wu, Bin Bi, Zhongzhou Zhao, Ji Zhang, Luo Si, Rui Wang, Wei Wang, and Haiqing Chen. 2019. A deep cascade model for multi-document reading comprehension. In Proceedings of AAAI. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In Proceedings of ICLR. 2295 Question: Which organisation was founded in Ontario, Canada in 1897 by Adelaide Hoodless? Scores Candidate Answers: Retrieving Reading Reranking [1] Women’s Institute 0.517 11.226 2.093 [2] Young Women’s Christian Association 0.231 11.263 2.299 [3] Federated Women’s Institutes of Canada 0.426 11.267 1.742 [4] Victorian Order of Nurses 0.360 11.139 1.837 [5] National Council of Women 0.291 8.966 1.02 .. .. .. .. . .. . Table 9: A sampled case (ID: sfq 21220) from the TriviaQA-Wikipedia dev set shows that although candidate [2] and candidate [3] get higher reranking and reading scores, the candidate [1] is preferred by the retrieving component and is therefore chosen as the final answer. The ground truth answer is “Women’s Institute”. Question: Hong Kong is one of two ‘special administrative regions’ of China; what is the other? Scores Candidate Answers: Retrieving Reading Reranking [1] Macau 0.195 11.067 2.502 [2] Kowloon 0.346 11.175 1.795 [3] Kowloon, and the new territories 0.346 7.941 0 [4] Macau, China 0.323 7.812 0 [5] Taiwan 0.224 5.926 0.028 .. .. .. . . .. . . Table 10: A sampled case (ID: sfq 10640) from the TriviaQA-Wikipedia dev set shows that although the candidate [2] gets higher retrieving and reading scores, the candidate [1] is chosen as the final answer since it has the highest reranking score. The ground truth answer is “Macau”. A Case Study To demonstrate how each component takes effect when predicting the final answer, we conduct some qualitative case studies sampled from the RE3QALARGE model on the TriviaQA-Wikipedia development set. For each question, we list top5 candidate answers along with their retrieving, reading, and reranking scores. As shown in Table 9, we first notice that the topranked predictions have highly-relevant semantics and share the same linguistic pattern. As a result, the top-4 candidates contain very similar reading scores from 11.1 to 11.3, which matches the observations of Clark and Gardner (2018). A likely reason of this phenomenon is that reading comprehension models are easily fooled by confusing distractors (also referred as adversarial examples mentioned by Jia and Liang (2017)). Under such circumstance, it is crucial to perform additional answer verifications to identify the final answer. 
In this example, we can see that the retriever becomes the key factor when the reader and reranker are distracted by confusing candidates (e.g., the second and third predictions). By taking the weighted sum of the three scores, our model eventually predicts the correct answer since the first prediction has the largest retrieving score. Similar observations can be made in Table 10. On the one hand, despite the confusing candidate “Kowloon” has the highest retrieving and reading scores, the reranker assigns a larger confidence on the candidate “Macau”. As a result, “Macau” is chosen as the final answer. On the other hand, we find that the reranking scores of some candidates (e.g., the third and fourth predictions) are zero. This is due to the span-level non-maximum suppression algorithm, where redundant spans such as “Macau, China” will be pruned before the reranking step. Therefore, the final weighted-sum scores of these candidates will be significantly lower than the top predictions, which is beneficial for filtering distractors out.
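The weighted decision rule underlying these case studies can be sketched as follows; the weights 1.4, 1, 1.4 are those reported in the experimental setup, while the dictionary layout and the scores, taken loosely from Table 9, are our own illustration.

def pick_answer(candidates, w_retrieve=1.4, w_read=1.0, w_rerank=1.4):
    """candidates: list of dicts with 'text', 'retrieve', 'read', 'rerank' scores."""
    def final_score(c):
        return (w_retrieve * c["retrieve"]
                + w_read * c["read"]
                + w_rerank * c["rerank"])
    best = max(candidates, key=final_score)
    return best["text"], final_score(best)

if __name__ == "__main__":
    # Values loosely modeled on the TriviaQA-Wikipedia case in Table 9.
    candidates = [
        {"text": "Women's Institute", "retrieve": 0.517, "read": 11.226, "rerank": 2.093},
        {"text": "Young Women's Christian Association", "retrieve": 0.231, "read": 11.263, "rerank": 2.299},
        {"text": "Federated Women's Institutes of Canada", "retrieve": 0.426, "read": 11.267, "rerank": 1.742},
    ]
    print(pick_answer(candidates))           # the first candidate wins on the weighted sum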
2019
221
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2296–2309 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2296 Multi-Hop Paragraph Retrieval for Open-Domain Question Answering Yair Feldman and Ran El-Yaniv Department of Computer Science Technion – Israel Institute of Technology Haifa, Israel {yairf11, rani}@cs.technion.ac.il Abstract This paper is concerned with the task of multi-hop open-domain Question Answering (QA). This task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching. We present a method for retrieving multiple supporting paragraphs, nested amidst a large knowledge base, which contain the necessary evidence to answer a given question. Our method iteratively retrieves supporting paragraphs by forming a joint vector representation of both a question and a paragraph. The retrieval is performed by considering contextualized sentence-level representations of the paragraphs in the knowledge source. Our method achieves state-of-the-art performance over two well-known datasets, SQuAD-Open and HotpotQA, which serve as our single- and multi-hop open-domain QA benchmarks, respectively. 1 1 Introduction Textual Question Answering (QA) is the task of answering natural language questions given a set of contexts from which the answers to these questions can be inferred. This task, which falls under the domain of natural language understanding, has been attracting massive interest due to extremely promising results that were achieved using deep learning techniques. These results were made possible by the recent creation of a variety of large-scale QA datasets, such as TriviaQA (Joshi et al., 2017) and SQuAD (Rajpurkar et al., 2016). The latest state-of-the-art methods are even capable of outperforming humans on certain tasks (Devlin et al., 2018)2. The basic and arguably the most popular task of QA is often referred to as Reading Comprehension 1Code is available at https://github.com/yairf11/MUPPET 2https://rajpurkar.github.io/SQuAD-explorer/ (RC), in which each question is paired with a relatively small number of paragraphs (or documents) from which the answer can potentially be inferred. The objective in RC is to extract the correct answer from the given contexts or, in some cases, deem the question unanswerable (Rajpurkar et al., 2018). Most large-scale RC datasets, however, are built in such a way that the answer can be inferred using a single paragraph or document. This kind of reasoning is termed single-hop reasoning, since it requires reasoning over a single piece of evidence. A more challenging task, called multi-hop reasoning, is one that requires combining evidence from multiple sources (Talmor and Berant, 2018; Welbl et al., 2018; Yang et al., 2018). Figure 1 provides an example of a question requiring multihop reasoning. To answer the question, one must first infer from the first context that Alex Ferguson is the manager in question, and only then can the answer to the question be inferred with any confidence from the second context. Another setting for QA is open-domain QA, in which questions are given without any accompanying contexts, and one is required to locate the relevant contexts to the questions from a large knowledge source (e.g., Wikipedia), and then extract the correct answer using an RC component. This task has recently been resurged following the work of Chen et al. 
(2017), who used a TFIDF based retriever to find potentially relevant documents, followed by a neural RC component that extracted the most probable answer from the retrieved documents. While this methodology performs reasonably well for questions requiring single-hop reasoning, its performance decreases significantly when used for open-domain multihop reasoning. We propose a new approach to accomplishing this task, called iterative multi-hop retrieval, in which one iteratively retrieves the necessary evi2297 Question: The football manager who recruited David Beckham managed Manchester United during what timeframe? Context 1: The 1995–96 season was Manchester United’s fourth season in the Premier League ... Their triumph was made all the more remarkable by the fact that Alex Ferguson ... had drafted in young players like Nicky Butt, David Beckham, Paul Scholes and the Neville brothers, Gary and Phil. Context 2: Sir Alexander Chapman Ferguson, CBE (born 31 December 1941) is a Scottish former football manager and player who managed Manchester United from 1986 to 2013. He is regarded by many players, managers and analysts to be one of the greatest and most successful managers of all time. Figure 1: An example of a question and its answer contexts from the HotpotQA dataset requiring multihop reasoning and retrieval. The first reasoning hop is highlighted in green, the second hop in purple, and the entity connecting the two is highlighted in blue bold italics. In the first reasoning hop, one has to infer that the manager in question is Alex Ferguson. Without this knowledge, the second context cannot possibly be retrieved with confidence, as the question could refer to any of the club’s managers throughout its history. Therefore, an iterative retrieval is needed in order to correctly retrieve this context pair. dence to answer a question. We believe this iterative framework is essential for answering multihop questions, due to the nature of their reasoning requirements. Our main contributions are the following: • We propose a novel multi-hop retrieval approach, which we believe is imperative for truly solving the open-domain multi-hop QA task. • We show the effectiveness of our approach, which achieves state-of-the-art results in both single- and multi-hop open-domain QA benchmarks. • We also propose using sentence-level representations for retrieval, and show the possible benefits of this approach over paragraph-level representations. While there are several works that discuss solutions for multi-hop reasoning (Dhingra et al., 2018; Zhong et al., 2019), to the best of our knowledge, this work is the first to propose a viable solution for open-domain multi-hop QA. 2 Task Definition We define the open-domain QA task by a triplet (KS, Q, A) where KS = {P1, P2, . . . , P|KS|} is a background knowledge source and Pi = (p1, p2, . . . , pli) is a textual paragraph consisting of li tokens, Q = (q1, q2, . . . , qm) is a textual question consisting of m tokens, and A = (a1, a2, . . . , an) is a textual answer consisting of n tokens, typically a span of tokens pj1, . . . , pjn in some Pi ∈KS, or optionally a choice from a predefined set of possible answers. The objective of this task is to find the answer A to the question Q using the background knowledge source KS. Formally speaking, our task is to learn a function φ such that A = φ(Q, KS). 
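Purely as an illustration of this formal definition, the task interface can be written down as a typed sketch; the concrete Python types and the trivial stand-in for the function φ are our own, not part of the paper.

from typing import Callable, List

Paragraph = List[str]          # P_i = (p_1, ..., p_{l_i}), a sequence of tokens
Question = List[str]           # Q = (q_1, ..., q_m)
Answer = List[str]             # A = (a_1, ..., a_n), typically a span from some P_i
KnowledgeSource = List[Paragraph]

OpenDomainQA = Callable[[Question, KnowledgeSource], Answer]   # A = phi(Q, KS)

def dummy_phi(question: Question, ks: KnowledgeSource) -> Answer:
    """A trivial stand-in that returns the first token of the first paragraph."""
    return ks[0][:1] if ks and ks[0] else []

phi: OpenDomainQA = dummy_phi
print(phi(["who", "wrote", "hamlet", "?"], [["Shakespeare", "wrote", "Hamlet"]]))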
Single-Hop Retrieval In the classic and most simple form of QA, questions are formulated in such a way that the evidence required to answer them may be contained in a single paragraph, or even in a single sentence. Thus, in the opendomain setting, it might be sufficient to retrieve a single relevant paragraph Pi ∈KS using the information present in the given question Q, and have a reading comprehension model extract the answer A from Pi. We call this task variation single-hop retrieval. Multi-Hop Retrieval In contrast to the singlehop case, there are types of questions whose answers can only be inferred by using at least two different paragraphs. The ability to reason with information taken from more than one paragraph is known in the literature as multi-hop reasoning (Welbl et al., 2018). In multi-hop reasoning, not only might the evidence be spread across multiple paragraphs, but it is often necessary to first read a subset of these paragraphs in order to extract the useful information from the other paragraphs, which might otherwise be understood as not completely relevant to the question. This situation becomes even more difficult in the opendomain setting, where one must first find an initial evidence paragraph in order to be able to retrieve the rest. This is demonstrated in Figure 1, where one can observe that the second context alone may appear to be irrelevant to the question at hand and the information in the first context is necessary to retrieve the second part of the evidence correctly. We extend the multi-hop reasoning ability to the open-domain setting, referring to it as multi-hop retrieval, in which the evidence paragraphs are re2298 trieved in an iterative fashion. We focus on this task and limit ourselves to the case where two iterations of retrieval are necessary and sufficient. 3 Methodology Our solution, which we call MUPPET (multi-hop paragraph retrieval), relies on the following basic scheme consisting of two main components: (a) a paragraph and question encoder, and (b) a paragraph reader. The encoder is trained to encode paragraphs into d-dimensional vectors, and to encode questions into search vectors in the same vector space. Then, a maximum inner product search (MIPS) algorithm is applied to find the most similar paragraphs to a given question. Several algorithms exist for fast (and possibly approximate) MIPS, such as the one proposed by Johnson et al. (2017). The most similar paragraphs are then passed to the paragraph reader, which, in turn, extracts the most probable answer to the question. It is critical that the paragraph encodings do not depend on the questions. This enables storing precomputed paragraph encodings and executing efficient MIPS when given a new search vector. Without this property, any new question would require the processing of the complete knowledge source (or a significant part of it). To support multi-hop retrieval, we propose the following extension to the basic scheme. Given a question Q, we first obtain its encoding q ∈Rd using the encoder. Then, we transform it into a search vector qs ∈Rd, which is used to retrieve the top-k relevant paragraphs {P Q 1 , P Q 2 , . . . , P Q k } ⊂KS using MIPS. In each subsequent retrieval iteration, we use the paragraphs retrieved in its previous iteration to reformulate the search vector. This produces k new search vectors, {˜qs 1, ˜qs 2, . . . , ˜qs k}, where ˜qs i ∈Rd, which are used in the same manner as in the first iteration to retrieve the next top-k paragraphs, again using MIPS. 
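A schematic rendering of this iterative retrieval loop is given below; for simplicity it keeps one precomputed encoding per paragraph and uses exact inner-product search in NumPy, whereas the full model uses sentence-level encodings and an approximate MIPS index (e.g., FAISS), and the encoder and reformulation components here are stand-in callables.

import numpy as np

def top_k_mips(search_vec, paragraph_matrix, k):
    """Return indices of the k paragraphs with the largest inner product."""
    scores = paragraph_matrix @ search_vec
    return np.argsort(-scores)[:k]

def two_hop_retrieve(question, paragraph_matrix, encode_question, reformulate, k=8):
    """Two retrieval iterations; returns (first_hop_idx, second_hop_idx) pairs."""
    q_search = encode_question(question)                    # first-hop search vector q^s
    first_hop = top_k_mips(q_search, paragraph_matrix, k)
    pairs = []
    for idx in first_hop:                                   # beam of width k
        q_tilde = reformulate(question, idx)                # reformulated search vector
        second_hop = top_k_mips(q_tilde, paragraph_matrix, k)
        pairs.extend((idx, j) for j in second_hop if j != idx)
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, num_paragraphs = 128, 10_000
    P = rng.standard_normal((num_paragraphs, d))            # precomputed paragraph encodings
    encode = lambda q: rng.standard_normal(d)               # stand-in encoder
    reform = lambda q, idx: P[idx] * 0.5 + rng.standard_normal(d)  # stand-in reformulation
    print(len(two_hop_retrieve("who managed Manchester United?", P, encode, reform, k=4)))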
This method can be seen as performing a beam search of width k in the encoded paragraphs’ space. A high-level view of the described solution is given in Figure 2. 3.1 Paragraph and Question Encoder We define f, our encoder model, in the following way. Given a paragraph P consisting of k sentences (s1, s2, . . . , sk) and m tokens (t1, t2, . . . , tm), such that si = (ti1, ti2, . . . , til), where l is the length of the sentence, our encoder generates k respective d-dimensional encodings (s1, s2, . . . , sk) = f(P), one for each sentence. This is in contrast to previous work in paragraph retrieval in which only a single fixed-size representation is used for each paragraph (Lee et al., 2018; Das et al., 2019). The encodings are created by passing (t1, t2, . . . , tm) through the following layers. Word Embedding We use the same embedding layer as the one suggested by Clark and Gardner (2018). Each token t is embedded into a vector t using both character-level and word-level information. The word-level embedding tw is obtained via pretrained word embeddings. The characterlevel embedding of a token t with lt characters (tc 1, tc 2, . . . , tc lt) is obtained in the following manner: each character tc i is embedded into a fixedsize vector tc i. We then pass each token’s character embeddings through a one-dimensional convolutional neural network, followed by max-pooling over the filter dimension. This produces a fixedsize character-level representation for each token, tc = max CNN(tc 1, tc 2, . . . , tc lt)  . Finally, we concatenate the word-level and character-level embeddings to form the final word representation, t = [tw; tc]. Recurrent Layer After obtaining the word representations, we use a bidirectional GRU (Cho et al., 2014) to process the paragraph and obtain the contextualized word representations, (c1, c2, . . . , cm) = BiGRU(t1, t2, . . . , tm). Sentence-wise max-pooling Finally, we chunk the contextualized representations of the paragraph tokens into their corresponding sentence groups, and apply max-pooling over the time dimension of each sentence group to obtain the parargaph’s d-dimensional sentence representations, si = max(ci1, ci2, . . . , cil). A high-level outline of the sentence encoder is shown is Figure 3a, where we can see a series of m tokens being passed through the aforementioned layers, producing k sentence representations. The encoding q of a question Q is computed similarly, such that q = f(Q). Note that we produce a single vector for any given question, thus the max-pooling operation is applied over all question words at once, disregarding sentence information. 2299 Figure 2: A high-level overview of our solution, MUPPET. (a) Sentence Encoder (b) Reformulation Component Figure 3: Architecture of the main components of our paragraph and question encoder. (a) Our sentence encoder architecture. The model receives a series of tokens as input and produces a sequence of sentence representations. (b) Our reformulation component architecture. This layer receives contextualized representations of a question and a paragraph, and produces a reformulated representation of the question. Reformulation Component The reformulation component receives a paragraph P and a question Q, and produces a single vector ˜q. First, contextualized word representations are obtained using the same embedding and recurrent layers used for the initial encoding, (cq 1, cq 2, . . . , cq nq) for Q and (cp 1, cp 2, . . . , cp np) for P. 
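Before turning to the attention layer, the sentence encoder defined above can be condensed into the following PyTorch sketch; the character-level CNN and pretrained word embeddings are collapsed into a single embedding layer for brevity, and the hyperparameters are illustrative rather than the paper's.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden=150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids, sentence_spans):
        """token_ids: (1, m) LongTensor; sentence_spans: list of (start, end) per sentence."""
        contextual, _ = self.bigru(self.embed(token_ids))      # (1, m, 2*hidden)
        sentence_reps = [
            contextual[0, start:end].max(dim=0).values         # max-pool over each sentence's tokens
            for start, end in sentence_spans
        ]
        return torch.stack(sentence_reps)                      # (k, 2*hidden) = (s_1, ..., s_k)

if __name__ == "__main__":
    enc = SentenceEncoder()
    tokens = torch.randint(0, 10_000, (1, 12))
    spans = [(0, 5), (5, 12)]                                  # two sentences
    print(enc(tokens, spans).shape)                            # torch.Size([2, 300])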
We then pass the contextualized representations through a bidirectional attention layer, which we adopt from Clark and Gardner (2018). The attention between question word i and paragraph word j is computed as: aij = wa 1 · cq i + wa 2 · cp j + wa 3 · (cq i ⊙cp j), Context: One of the most famous people born in Warsaw was Maria Skłodowska-Curie, who achieved international recognition for her research on radioactivity and was the first female recipient of the Nobel Prize. Famous musicians include Władysław Szpilman and Fr´ed´eric Chopin. Though Chopin was born in the village of ˙Zelazowa Wola, about 60 km (37 mi) from Warsaw, he moved to the city with his family when he was seven months old. Casimir Pulaski, a Polish general and hero of the American Revolutionary War, was born here in 1745. Question 1: What was Maria Curie the first female recipient of? Question 2: How old was Chopin when he moved to Warsaw with his family? Figure 4: An example from the SQuAD dataset of a paragraph that acts as the context for two different questions. Question 1 and its evidence (highlighted in purple) have little relation to question 2 and its evidence (highlighted in green). This motivates our method of storing sentence-wise encodings instead of a single representation for an entire paragraph. where wa 1, wa 2, wa 3 ∈Rd are learned vectors. For each question word, we compute the vector ai: αij = eaij Pnp j=1 eaij , ai = np X j=1 αijcp j. A paragraph-to-question vector ap is computed as follows: mi = max 1≤j≤np aij, βi = emi Pnq i=1 emi ap = nq X i=1 βicq i . We concatenate cq i , ai, cq i ⊙ai and ap ⊙ai and pass the result through a linear layer with ReLU 2300 activations to compute the final bidirectional attention vectors. We also use a residual connection where we process these representations with a bidirectional GRU and another linear layer with ReLU activations. Finally, we sum the outputs of the two linear layers. As before, we derive the ddimensional reformulated question representation ˜q using a max-pooling layer on the outputs of the residual layer. A high-level outline of the reformulation layer is given in Figure 3b, where m contextualized token representations of the question and n contextualized token representations of the paragraph are passed through the component’s layers to produce the reformulated question representation, ˜q. Relevance Scores Given the sentence representations (s1, s2, . . . , sk) of a paragraph P, and the question encoding q for Q, the relevance score of P with respect to a question Q is calculated in the following way: rel(Q, P) = max i=1,...,k σ   si si ⊙q si · q q  ·   w1 w2 w3 w4  + b ! , where w1, w2, w4 ∈Rd and w3, b ∈R are learned parameters. A similar max-pooling encoding approach, along with the scoring layer’s structure, were proposed by Conneau et al. (2017) who showed their efficacy on various sentence-level tasks. We find this sentence-wise formulation to be beneficial because it suffices for one sentence in a paragraph to be relevant to a question for the whole paragraph to be considered as relevant. This allows more fine-grained representations for paragraphs and more accurate retrieval. An example of the benefits of using this kind of sentence-level model is given in Figure 4, where we see two questions answered by two different sentences. Our model allows each question to be similar only to parts of the paragraph, and not necessarily to all of it. 
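The sentence-wise relevance score can be sketched as follows (our NumPy rendering; the random weights stand in for the learned parameters w1-w4 and b): each sentence encoding is combined with the question encoding through the features [s_i; s_i ⊙ q; s_i · q; q], passed through a sigmoid-activated linear layer, and the paragraph score is the maximum over its sentences.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relevance_score(sentence_encs, q, w1, w2, w3, w4, b):
    """sentence_encs: (k, d); q: (d,); w1, w2, w4: (d,); w3, b: scalars."""
    scores = []
    for s in sentence_encs:
        logit = (s @ w1                      # s_i term
                 + (s * q) @ w2              # elementwise-product term
                 + w3 * (s @ q)              # dot-product term
                 + q @ w4                    # question term
                 + b)
        scores.append(sigmoid(logit))
    return max(scores)                       # one relevant sentence suffices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 32, 4                             # small illustrative dimensions
    S, q = rng.standard_normal((k, d)), rng.standard_normal(d)
    w1, w2, w4 = (rng.standard_normal(d) for _ in range(3))
    print(relevance_score(S, q, w1, w2, 0.1, w4, 0.0))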
Search Vector Derivation Recall that our retrieval algorithm is based on executing a MIPS in the paragraph encoding space. To derive such a search vector from the question encoding q, we observe that: rel(Q, P) ∝max i=1,...,k s⊤ i (w1 + w2 ⊙q + w3 · q). Therefore, the final search vector of a question Q is qs = w1 + w2 ⊙q + w3 · q. The same equations apply when predicting the relevance score for the second retrieval iteration, in which case q is swapped with ˜q. Training and Loss Functions Each training sample consists of a question and two paragraphs, (Q, P 1, P 2), where P 1 corresponds to a paragraph retrieved in the first iteration, and P 2 corresponds to a paragraph retrieved in the second iteration using the reformulated vector ˜q. P 1 is considered relevant if it constitutes one of the necessary evidence paragraphs to answer the question. P 2 is considered relevant only if P 1 and P 2 together constitute the complete set of evidence paragraphs needed to answer the question. Both iterations have the same form of loss functions, and the model is trained by optimizing the sum of the iterations’ losses. Our training objective for each iteration is composed of two components: a binary cross-entropy loss function and a ranking loss function. The cross-entropy loss is defined as follows: LCE = −1 N N X i=1 yi log rel(Qi, Pi)  + (1 −yi) log 1 −rel(Qi, Pi)  , where yi ∈{0, 1} is a binary label indicating the true relevance of Pi to Qi in the iteration in which rel(Qi, Pi) is calculated, and N is the number of samples in the current batch. The ranking loss is computed in the following manner. First, for each question Qi in a given batch, we find the mean of the scores given to positive and negative paragraphs for each question, qpos i = 1 M1 PM1 j=1 rel(Qi, Pj) and qneg i = 1 M2 PM2 j=1 rel(Qi, Pj), where M1 and M2 are the number of positive and negative samples for Qi, respectively. We then define the margin ranking loss (Socher et al., 2013) as LR = 1 M M X i=1 max(0, γ −qpos i + qneg i ), (1) where M is the number of distinct questions in the current batch, and γ is a hyperparameter. The final objective is the sum of the two losses: L = LCE + λLR, (2) where λ is a hyperparameter. 2301 We note that we found it slightly beneficial to incorporate pretrained ELMo (Peters et al., 2018) embeddings in our model. For more detailed information of the implementation details and training process, please refer to Appendix C. 3.2 Paragraph Reader The paragraph reader receives as input a question Q and a paragraph P and extracts the most probable answer span to Q from P. We use the S-norm model proposed by Clark and Gardner (2018). A detailed description of the model is given in Appendix A. Training An input sample for the paragraph reader consists of a question and a single context (Q, P). We optimize the same negative loglikelihood function used in the S-norm model for the span start boundaries: Lstart = −log P j∈P Q P k∈Aj eskj P j∈P Q Pnj i=1 esij ! , where P Q is the set of paragraphs paired with the same question Q, Aj is the set of tokens that start an answer span in the j-th paragraph, and sij is the score given to the i-th token in the j-th paragraph. The same formulation is used for the span end boundaries, so that the final objective function is the sum of the two: Lspan = Lstart + Lend. 4 Experiments and Results We test our approach on two datasets, and measure end-to-end QA performance using the standard exact match (EM) and F1 metrics, as well as the metrics proposed by Yang et al. 
(2018) for the HotpotQA dataset (see Appendix B). 4.1 Datasets HotpotQA Yang et al. (2018) introduced a dataset of Wikipedia-based questions, which require reasoning over multiple paragraphs to find the correct answer. The dataset also includes hard supervision on sentence-level supporting facts, which encourages the model to give explainable answer predictions. Two benchmark settings are available for this dataset: (1) a distractor setting, in which the reader is given a question as well as a set of paragraphs that includes both the supporting facts and irrelevant paragraphs; (2) a full wiki setting, which is an open-domain version of the dataset. We use this dataset as our benchmark for the multi-hop retrieval setting. Several extensions must be added to the reader from Section 3.2 in order for it to be suitable for the HotpotQA dataset. A detailed description of our proposed extensions is given in Appendix B. SQuAD-Open Chen et al. (2017) decoupled the questions from their corresponding contexts in the original SQuAD dataset (Rajpurkar et al., 2016), and formed an open-domain version of the dataset by defining an entire Wikipedia dump to be the background knowledge source from which the answer to the question should be extracted. We use this dataset to test the effectiveness of our method in a classic single-hop retrieval setting. 4.2 Experimental Setup Search Hyperparameters For our experiments in the multi-hop setting, we used a width of 8 in the first retrieval iteration. In all our experiments, unless stated otherwise, the reader is fed the top 45 paragraphs through which it reasons independently and finds the most probable answers. In addition, we found it beneficial to limit the search space of our MIPS retriever to a subset of the knowledge source, which is determined by a TF-IDF heuristic retriever. We define ni to be the size of the search space for retrieval iteration i. As we will see, there is a trade-off for choosing various values of ni. A large value of ni offers the possibility of higher recall, whereas a small value of ni introduces less noise in the form of irrelevant paragraphs. Knowledege Sources For HotpotQA, our knowledge source is the same Wikipedia version used by Yang et al. (2018)3. This version is a set of all of the first paragraphs in the entire Wikipedia. For SQuAD-Open, we use the same Wikipedia dump used by Chen et al. (2017). For both knowledge sources, the TF-IDF based retriever we use for search space reduction is the one proposed by Chen et al. (2017), which uses bigram hashing and TF-IDF matching. We note that in the HotpotQA Wikipedia version each document is a single paragraph, while in SQuAD-Open, the full Wikipedia documents are used. 3It has recently come to our attention that during our work, some details of the Wikipedia version have changed. Due to time limitations, we use the initial version description. 2302 Setting Method Answer Sup Fact Joint EM F1 EM F1 EM F1 distractor Baseline (Yang et al., 2018) 44.44 58.28 21.95 66.66 11.56 40.86 Our Reader 51.56 65.32 44.54 75.27 28.68 54.08 full wiki Baseline (Yang et al., 2018) 24.68 34.36 5.28 40.98 2.54 17.73 TF-IDF + Reader 27.55 36.58 10.75 42.45 7.00 21.47 MUPPET (sentence-level) 30.20 39.43 16.57 46.13 11.38 26.55 MUPPET (paragraph-level) 31.07 40.42 17.00 47.71 11.76 27.62 Table 1: Primary results for HotpotQA (dev set). At the top of the table, we compare our Paragraph Reader to the baseline model of Yang et al. 
(2018) (as of writing this paper, no other published results are available other than the baseline results). At the bottom, we compare the end-to-end performance on the full wiki setting. TF-IDF + Reader refers to using the TF-IDF based retriever without our MIPS retriever. MUPPET (sentence-level) refers to our approach with sentence-level representations, and MUPPET (paragraph-level) refers to our approach with paragraph-level representations. For both sentence- and paragraph-level results, we set n1 = 32 and n2 = 512. Method EM F1 DrQA (Chen et al., 2017) 28.4 DrQA (Chen et al., 2017) (multitask) 29.8 R3 (Wang et al., 2018a) 29.1 37.5 DS-QA (Lin et al., 2018) 28.7 36.6 Par. Ranker + Full Agg. (Lee et al., 2018) 30.2 Minimal (Min et al., 2018) 34.7 42.6 Multi-step (Das et al., 2019) 31.9 39.2 BERTserini (Yang et al., 2019) 38.6 46.1 TF-IDF + Reader 34.6 41.6 MUPPET (sentence-level) 39.3 46.2 MUPPET (paragraph-level) 35.6 42.5 Table 2: Primary results for SQuAD-Open. 4.3 Results Primary Results Tables 1 and 2 show our main results on the HotpotQA and SQuAD-Open datasets, respectively. In the HotpotQA distractor setting, our paragraph reader greatly improves the results of the baseline reader, increasing the joint EM and F1 scores by 17.12 (148%) and 13.22 (32%) points, respectively. In the full wiki setting, we compare three methods of retrieval: (1) TF-IDF, in which only the TF-IDF heuristic is used. The reader is fed all possible paragraph pairs from the top-10 paragraphs. (2) Sentencelevel, in which we use MUPPET with sentencelevel encodings. (3) Paragraph-level, in which we use MUPPET with paragraph-level encodings (no sentence information). We can see that both methods significantly outperform the na¨ıve TFIDF retriever, indicating the efficacy of our approach. As of writing this paper, we are placed second in the HotpotQA full wiki setting (test set) leaderboard4. For SQuAD-Open, our sentencelevel method established state-of-the-art results, improving the current non-BERT (Devlin et al., 2018) state-of-the-art by 4.6 (13%) and 3.6 (8%) EM and F1 points, respectively. This shows that our encoder can be useful not only for multi-hop questions, but also for single-hop questions. Retrieval Recall Analysis We analyze the performance of the TF-IDF retriever for HotpotQA in Figure 5a. We can see that the retriever succeeds in retrieving at least one of the gold paragraphs for each question (above 90% with the top32 paragraphs), but fails at retrieving both gold paragraphs. This demonstrates the necessity of an efficient multi-hop retrieval approach to aid or replace classic information retrieval methods. Effect of Narrowing the Search Space In Figures 5b and 5c, we show the performance of our method as a function of the size of the search space of the last retrieval iteration. For SQuADOpen, the TF-IDF retriever initially retrieves a set of documents, which are then split into paragraphs to form the search space. Each search space of top-k paragraphs limits the potential recall of the model to that of the top-k paragraphs retrieved by the TF-IDF retriever. This proves to be suboptimal for very small values of k, as the performance of the TF-IDF retriever is not good enough. Our models, however, fail to benefit from increasing the search space indefinitely, hinting that they are not as robust to noise as we would want them to be. 4March 5, 2019. 
Leaderboard available at https://hotpotqa.github.io/ 2303 (a) TF-IDF retrieval results (b) SQuAD-Open (c) HotpotQA Figure 5: Various results based on the TF-IDF retriever. (a) Retrieval results of the TF-IDF hueristic retriever on HotpotQA. At Least One @ k is the number of questions for which at least one of the paragraphs containing the supporting facts is retrieved in the top-k paragraphs. Potentially Perfect @ k is the number of questions for which both of the paragraphs containing the supporting facts are retrieved in the top-k paragraphs. (b) and (c) Performance analysis on the SQuAD-Open and HotpotQA datasets, respectively, as more documents/paragraphs are retrieved by the TF-IDF heuristic retriever. Note that for SQuAD-Open each document contains several paragraphs, and the reader is fed the top-k TF-IDF ranked paragraphs from within the documents in the search space. Effectiveness of Sentence-Level Encodings Our method proposes using sentence-level encodings for paragraph retrieval. We test the significance of this approach in Figures 5b and 5c. While sentence-level encodings seem to be vital for improving state-of-the-art results on SQuAD-Open, the same cannot be said about HotpotQA. We hypothesize that this is a consequence of the way the datasets were created. In SQuAD, each paragraph serves as the context of several questions, as shown in Figure 4. This leads to questions being asked about facts less essential to the gist of the paragraph, and thus they would not be encapsulated in a single paragraph representation. In HotpotQA, however, most of the paragraphs in the training set serve as the context of at most one question. 5 Related Work Chen et al. (2017) first introduced the use of neural methods to the task of open-domain QA using a textual knowledge source. They proposed DrQA, a pipeline approach with two components: a TF-IDF based retriever, and a multi-layer neural network that was trained to find an answer span given a question and a paragraph. In an attempt to improve the retrieval of the TF-IDF based component, many existing works have used Distant Supervision (DS) to further re-rank the retrieved paragraphs (Htut et al., 2018; Yan et al., 2018). Wang et al. (2018a) used reinforcement learning to train a re-ranker and an RC component in an end-to-end manner, and showed its advantage over the use of DS alone. Min et al. (2018) trained a sentence selector and demonstrated the effectiveness of reading minimal contexts instead of complete documents. As DS can often lead to wrong labeling, Lin et al. (2018) suggested a denoising method for alleviating this problem. While these methods have proved to increase performance in various open-domain QA datasets, their re-ranking approach is limited in the number of paragraphs it can process, as it requires the joint reading of a question with all possible paragraphs. This is in contrast to our approach, in which all paragraph representations are precomputed to allow efficient large-scale retrieval. There are some works that adopted a similar precomputation scheme. Lee et al. (2018) learned an encoding function for questions and paragraphs and ranked paragraphs by their dot-product similarity with the question. Many of their improvements, however, can be attributed to the incorporation of answer aggregation methods as suggested by Wang et al. (2018b) in their model, which enhanced their results significantly. Seo et al. 
(2018) proposed phrase-indexed QA (PI-QA), a new formulation of the QA task that requires the independent encoding of answers and questions. The question encodings are then used to retrieve the correct answers by performing MIPS. This is more of a challenge task rather than a solution for opendomain QA. A recent work by Das et al. (2019) proposed a new framework for open-domain QA that employs a multi-step interaction between a retriever and a reader. This interactive framework 2304 is used to refine a question representation in order for the retrieval to be more accurate. Their method is complimentary to ours – the interactive framework is used to enhance retrieval performance for single-hop questions, and does not handle the multi-hop domain. Another line of work reminiscent of our method is the one of Memory Networks (Weston et al., 2015). Memory Networks consist of an array of cells, each capable of storing a vector, and four modules (input, update, output and response) that allow the manipulation of the memory for the task at hand. Many variations of Memory Networks have been proposed, such as end-to-end Memory Networks (Sukhbaatar et al., 2015), Key-Value Memory Networks (Miller et al., 2016), and Hierarchical Memory Networks (Chandar et al., 2016). 6 Concluding Remarks We present MUPPET, a novel method for multihop paragraph retrieval, and show its efficacy in both single- and multi-hop QA datasets. One difficulty in the open-domain multi-hop setting is the lack of supervision, a difficulty that in the singlehop setting is alleviated to some extent by using distant supervision. We hope to tackle this problem in future work to allow learning more than two retrieval iterations. An interesting improvement to our approach would be to allow the retriever to automatically determine whether or not more retrieval iterations are needed. A promising direction could be a multi-task approach, in which both single- and multi-hop datasets are learned jointly. We leave this for future work. Acknowledgments This research was partially supported by the Israel Science Foundation (grant No. 710/18). References Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. 2016. Hierarchical memory networks. arXiv preprint arXiv:1605.07427. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In ACL. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In ACL. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retrieverreader interaction for scalable open-domain question answering. In ICLR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In NAACL-HLT. Yarin Gal and Zoubin Ghahramani. 2016. 
A theoretically grounded application of dropout in recurrent neural networks. In NIPS. Phu Mon Htut, Samuel R. Bowman, and Kyunghyun Cho. 2018. Training a ranking function for opendomain question answering. In NAACL-HLT. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. Billion-scale similarity search with gpus. CoRR, abs/1702.08734. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. In EMNLP. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In ACL. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In EMNLP. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In ACL. 2305 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP. Minjoon Seo, Tom Kwiatkowski, Ankur P. Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phraseindexed question answering: A new challenge for scalable document comprehension. In EMNLP. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In NIPS. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR. Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NIPS. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In NAACL-HLT. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced ranker-reader for open-domain question answering. In AAAI. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In ICLR. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In ICLR. Ming Yan, Jiangnan Xia, Chen Wu, Bin Bi, Zhongzhou Zhao, Ji Zhang, Luo Si, Rui Wang, Wei Wang, and Haiqing Chen. 2018. A deep cascade model for multi-document reading comprehension. CoRR, abs/1811.11374. 
Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In EMNLP. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. Victor Zhong, Caiming Xiong, Nitish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. In ICLR.
A Paragraph Reader
In this section we describe in detail the reader mentioned in Section 3.2. The paragraph reader receives as input a question Q and a paragraph P and extracts the most probable answer span to Q from P. We use the shared-norm model presented by Clark and Gardner (2018), which we refer to as S-norm. The model's architecture is quite similar to the one we used for the encoder. First, we process Q and P separately to obtain their contextualized token representations, in the same manner as used in the encoder. We then pass the contextualized representations through a bidirectional attention layer similar to the one defined in the reformulation layer of the encoder, with the only difference being that the roles of the question and the paragraph are switched. As before, we further pass the bidirectional attention representations through a residual connection, this time using a self-attention layer between the bidirectional GRU and the linear layer. The self-attention mechanism is similar to the bidirectional attention layer, only now it is between the paragraph and itself. Therefore, question-to-paragraph attention is not used, and we set $a_{ij} = -\infty$ if $i = j$. The summed outputs of the residual connection are passed to the prediction layer. The inputs to the prediction layer are passed through a bidirectional GRU followed by a linear layer that predicts the answer span start scores. The hidden layers of that GRU are concatenated with the input and passed through another bidirectional GRU and linear layer to predict the answer span end scores.
Training An input sample for the paragraph reader consists of a question and a single context (Q, P). We optimize the same negative log-likelihood function used in the S-norm model for the span start boundaries:
$$L_{start} = -\log\left(\frac{\sum_{j \in P_Q} \sum_{k \in A_j} e^{s_{kj}}}{\sum_{j \in P_Q} \sum_{i=1}^{n_j} e^{s_{ij}}}\right),$$
where $P_Q$ is the set of paragraphs paired with the same question Q, $A_j$ is the set of tokens that start an answer span in the j-th paragraph, and $s_{ij}$ is the score given to the i-th token in the j-th paragraph. The same formulation is used for the span end boundaries, so that the final objective function is the sum of the two: $L_{span} = L_{start} + L_{end}$.
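To make the shared-norm objective concrete, the following is a minimal NumPy sketch of $L_{start}$ for a single question (the same computation applies to $L_{end}$); the array layout and the toy inputs are illustrative assumptions, not the paper's TensorFlow implementation.

```python
import numpy as np

def shared_norm_start_loss(scores, answer_starts):
    """Shared-norm span-start loss for one question.

    scores: list of 1-D arrays, scores[j][i] = s_ij, the start score of
        token i in the j-th paragraph paired with the question.
    answer_starts: list of index lists, answer_starts[j] = A_j, the token
        positions that start a gold answer span in paragraph j.
    """
    # Denominator: normalize over ALL tokens of ALL paragraphs of the question.
    all_scores = np.concatenate(scores)
    m = all_scores.max()
    log_denom = np.log(np.sum(np.exp(all_scores - m))) + m

    # Numerator: sum of exp-scores of every gold start token, in every paragraph.
    gold = np.concatenate([np.asarray(s)[idx]
                           for s, idx in zip(scores, answer_starts) if len(idx) > 0])
    log_num = np.log(np.sum(np.exp(gold - m))) + m

    return -(log_num - log_denom)  # L_start = -log(numerator / denominator)

# Toy usage: two paragraphs for one question; paragraph 0 holds the answer at token 2.
loss = shared_norm_start_loss(
    scores=[np.array([0.1, -0.3, 2.0, 0.5]), np.array([0.2, 0.0, -1.0])],
    answer_starts=[[2], []],
)
print(loss)
```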
B Paragraph Reader Extension for HotpotQA
HotpotQA presents the challenge of not only predicting an answer span, but also yes/no answers. This is a combination of span-based questions and multiple-choice questions. In addition, one is also required to provide explainability to the answer predictions by predicting the supporting facts leading to the answer. We extend the paragraph reader from Section 3.2 to support these predictions in the following manner.
Yes/No Prediction We argue that one can decide whether the answer to a given question should be span-based or yes/no-based without looking at any context at all. Therefore, we first create a fixed-size vector representing the question using max-pooling over the first bidirectional GRU's states of the question. We pass this representation through a linear layer that predicts whether this is a yes/no-based question or a span-based question. If span-based, we predict the answer span from the context using the original span prediction layer. If yes/no-based, we encode the question-aware context representations to a fixed-size vector by performing max-pooling over the outputs of the residual self-attention layer. As before, we then pass this vector through a linear layer to predict a yes/no answer.
Supporting Fact Prediction As a context's supporting facts for a question are at the sentence level, we encode the question-aware context representations to fixed-size sentence representations by passing the outputs of the residual self-attention layer through another bidirectional GRU, followed by performing max-pooling over the sentence groups of the GRU's outputs. Each sentence representation is then passed through a multilayer perceptron with a single hidden layer equipped with ReLU activations to predict whether it is indeed a supporting fact or not.
Training An input sample for the paragraph reader consists of a question and a single context, (Q, P). Nevertheless, as HotpotQA requires multiple paragraphs to answer a question, we define P to be the concatenation of these paragraphs. Our objective function comprises four loss functions, corresponding to the four possible predictions of our model. For the span-based prediction we use $L_{span}$, as before. We use a similar negative log-likelihood loss for the answer type prediction (whether the answer should be span-based or yes/no-based) and for a yes/no answer prediction:
$$L_{type} = -\log\left(\frac{\sum_{j \in P_Q} e^{s_j^{type}}}{\sum_{j \in P_Q}\left(e^{s_j^{binary}} + e^{s_j^{span}}\right)}\right),$$
$$L_{yes/no} = -\log\left(\frac{\sum_{j \in P_Q} e^{s_j^{yes/no}}}{\sum_{j \in P_Q}\left(e^{s_j^{yes}} + e^{s_j^{no}}\right)}\right),$$
where $P_Q$ is the set of paragraphs paired with the same question Q, and $e^{s_j^{binary}}$, $e^{s_j^{span}}$ and $e^{s_j^{type}}$ are the likelihood scores of the j-th question-paragraph pair being a binary yes/no-based type, a span-based type, and its true type, respectively. $e^{s_j^{yes}}$, $e^{s_j^{no}}$ and $e^{s_j^{yes/no}}$ are the likelihood scores of the j-th question-paragraph pair having the answer 'yes', the answer 'no', and its true answer, respectively. For span-based questions, $L_{yes/no}$ is defined to be zero, and vice-versa. For the supporting fact prediction, we use a binary cross-entropy loss on each sentence, $L_{sp}$. The final loss function is the sum of these four objectives: $L_{hotpot} = L_{span} + L_{type} + L_{yes/no} + L_{sp}$. During inference, the supporting facts prediction is taken only from the paragraph from which the answer is predicted.
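For clarity, here is a minimal NumPy sketch of how the four terms could be combined into $L_{hotpot}$ for one question; the argument names, the use of a mean for the per-sentence binary cross-entropy, and the overall interface are assumptions made for illustration only.

```python
import numpy as np

def shared_norm_nll(true_scores, all_scores):
    """-log( sum(exp(true_scores)) / sum(exp(all_scores)) ), computed stably."""
    m = np.max(all_scores)
    num = np.log(np.sum(np.exp(np.asarray(true_scores) - m)))
    den = np.log(np.sum(np.exp(np.asarray(all_scores) - m)))
    return -(num - den)

def hotpot_loss(span_loss, s_binary, s_span, is_binary,
                s_yes, s_no, yes_is_true, sp_probs, sp_labels):
    """L_hotpot = L_span + L_type + L_yes/no + L_sp for one question.

    s_binary, s_span: per-paragraph scores for the yes/no-based and span-based types.
    is_binary: True if the gold answer is yes/no-based.
    s_yes, s_no: per-paragraph scores for the answers 'yes' and 'no'.
    sp_probs, sp_labels: predicted probabilities / gold 0-1 labels per sentence.
    """
    # Type loss: numerator uses the scores of the true answer type.
    true_type = s_binary if is_binary else s_span
    l_type = shared_norm_nll(true_type, np.concatenate([s_binary, s_span]))

    # Yes/no loss, defined to be zero for span-based questions.
    if is_binary:
        true_ans = s_yes if yes_is_true else s_no
        l_yes_no = shared_norm_nll(true_ans, np.concatenate([s_yes, s_no]))
    else:
        l_yes_no = 0.0

    # Binary cross-entropy over supporting-fact sentences (averaged here).
    eps = 1e-8
    p, y = np.asarray(sp_probs), np.asarray(sp_labels)
    l_sp = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    return span_loss + l_type + l_yes_no + l_sp
```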
Metrics Three sets of metrics were proposed by Yang et al. (2018) to evaluate performance on the HotpotQA dataset. The first set of metrics focuses on evaluating the answer span. For this purpose the exact match (EM) and F1 metrics are used, as suggested by Rajpurkar et al. (2016). The second set of metrics focuses on the explainability of the models, by evaluating the supporting facts directly using the EM and F1 metrics on the set of supporting fact sentences. The final set of metrics combines the evaluation of answer spans and supporting facts as follows. For each example, given its precision and recall on the answer span ($P^{(ans)}$, $R^{(ans)}$) and the supporting facts ($P^{(sup)}$, $R^{(sup)}$), respectively, the joint F1 is calculated as
$$P^{(joint)} = P^{(ans)} P^{(sup)}, \qquad R^{(joint)} = R^{(ans)} R^{(sup)},$$
$$\text{Joint F1} = \frac{2\, P^{(joint)} R^{(joint)}}{P^{(joint)} + R^{(joint)}}.$$
The joint EM is 1 only if both tasks achieve an exact match and otherwise 0. Intuitively, these metrics penalize systems that perform poorly on either task. All metrics are evaluated example-by-example, and then averaged over examples in the evaluation set.
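A minimal sketch of the joint metric, assuming the per-example answer-span and supporting-fact precision/recall and EM flags have already been computed:

```python
def joint_scores(p_ans, r_ans, p_sup, r_sup, em_ans, em_sup):
    """Joint precision, recall, F1 and EM for a single example."""
    p_joint = p_ans * p_sup
    r_joint = r_ans * r_sup
    if p_joint + r_joint == 0:
        joint_f1 = 0.0
    else:
        joint_f1 = 2 * p_joint * r_joint / (p_joint + r_joint)
    joint_em = 1.0 if (em_ans and em_sup) else 0.0
    return p_joint, r_joint, joint_f1, joint_em

# Example: perfect answer span, but only half of the supporting facts recovered.
print(joint_scores(p_ans=1.0, r_ans=1.0, p_sup=1.0, r_sup=0.5, em_ans=True, em_sup=False))
```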
C Implementation Details
We use the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization. We implement all our models using TensorFlow.
Architecture Details For the word-level embeddings, we use the GloVe 300-dimensional embeddings pretrained on the 840B Common Crawl corpus (Pennington et al., 2014). For the character-level embeddings, we use 20-dimensional character embeddings, and use a 1-dimensional CNN with 100 filters of size 5, with a dropout (Srivastava et al., 2014) rate of 0.2. For the encoder, we also concatenate ELMo (Peters et al., 2018) embeddings with a dropout rate of 0.5 and the token representations from the output of the embedding layer to form the final token representations, before processing them through the first bidirectional GRU. We use the ELMo weights pretrained on the 5.5B dataset.5 To speed up computations, we cache the context-independent token representations of all tokens that appear at least once in the titles of the HotpotQA Wikipedia version, or appear at least five times in the entire Wikipedia version. Words not in this vocabulary are given a fixed OOV vector. We use a learned weighted average of all three ELMo layers. Variational dropout (Gal and Ghahramani, 2016), where the same dropout mask is applied at each time step, is applied on the inputs of all recurrent layers with a dropout rate of 0.2. We set the encoding size to be d = 1024. For the paragraph reader used for HotpotQA, we use a state size of 150 for the bidirectional GRUs. The size of the hidden layer in the MLP used for supporting fact prediction is set to 150 as well. Here again variational dropout with a dropout rate of 0.2 is applied on the inputs of all recurrent layers and attention mechanisms. The reader used for SQuAD is the shared-norm model trained on the SQuAD dataset by Clark and Gardner (2018).6
5 Available at https://allennlp.org/elmo
6 Available at https://github.com/allenai/document-qa
Training Details We train all our models using the Adadelta optimizer (Zeiler, 2012) with a learning rate of 1.0 and ρ = 0.95.
SQuAD-Open: The training data is gathered as follows. For each question in the original SQuAD dataset, the original paragraph given as the question's context is considered as the single relevant (positive) paragraph. We gather ∼12 irrelevant (negative) paragraphs for each question in the following manner:
• The three paragraphs with the highest TF-IDF similarity to the question in the same SQuAD document as the relevant paragraph (excluding the relevant paragraph). The same method is applied to retrieve the three paragraphs most similar to the relevant paragraph.
• The two paragraphs with the highest TF-IDF similarity to the question from the set of all first paragraphs in the entire Wikipedia (excluding the relevant paragraph's article). The same method is applied to retrieve the two paragraphs most similar to the relevant paragraph.
• Two randomly sampled paragraphs from the entire Wikipedia.
Questions that contain only stop-words are dropped, as they are most likely too dependent on the original context and not suitable for open-domain QA. In each epoch, a question appears as a training sample four times; once with the relevant paragraph, and three times with randomly sampled irrelevant paragraphs. We train with a batch size of 45, and do not use the ranking loss by setting λ = 0 in Equation (2). We limit the length of the paragraphs to 600 tokens.
HotpotQA: The paragraphs used for training the encoder are the gold and distractor paragraphs supplied in the original HotpotQA training set. As mentioned in Section 3.1, each training sample consists of a question and two paragraphs, (Q, P1, P2), where P1 corresponds to a paragraph retrieved in the first iteration, and P2 corresponds to a paragraph retrieved in the second iteration. For each question, we create the following sample types:
1. Gold: The two paragraphs are the two gold paragraphs of the question. Both P1 and P2 are considered positive.
2. First gold, second distractor: P1 is one of the gold paragraphs and considered positive, while P2 can be a random paragraph from the training set, the same as P1, or one of the distractors, with probabilities 0.05, 0.1 and 0.85, respectively. P2 is considered negative.
3. First distractor, second gold: P1 is either one of the distractors or a random paragraph from the training set, with probabilities 0.9 and 0.1, respectively. P2 is one of the gold paragraphs. Both P1 and P2 are considered negative.
4. All distractors: Both P1 and P2 are sampled from the question's distractors, and are considered negative.
5. Gold from another question: A gold paragraph pair taken from another question; both paragraphs are considered negative.
The use of the above sample types is motivated as follows. Sample type 1 is the only one that contains purely positive examples and hence is mandatory. Sample type 2 is necessary to allow the model to learn a valuable reformulation, which does not give a relevant score based solely on the first paragraph. Sample type 3 is complementary to type 2; it allows the model to learn that a paragraph pair is irrelevant if the first paragraph is irrelevant, regardless of the second. Sample type 4 is used for random negative sampling, which is the most common case of all. Sample type 5 is used to guarantee that the model does not determine relevancy based solely on the paragraph pair, but also based on the question. In each training batch, we include three samples for each question in the batch: a single gold sample (type 1), and two samples from the other four types, with sample probabilities of 0.35, 0.35, 0.25 and 0.05, respectively. We use a batch size of 75 (25 unique questions). We set the margin to be γ = 1 in Equation (1) and λ = 1 in Equation (2), for both prediction iterations. We limit the length of the paragraphs to 600 tokens.
HotpotQA Reader: The reader receives a question and a concatenation of a paragraph pair as input. Each training batch consists of three samples with three different paragraph pairs for each question: a single gold pair, which is the two gold paragraphs of the question, and two randomly sampled paragraph pairs from the set of the distractors and one of the gold paragraphs of the question. We label the correct answer spans to be every text span that has an exact match with the ground truth answer, even in the distractor paragraphs, as sketched below.
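A minimal sketch of this span-labeling step, assuming a pre-tokenized paragraph and answer (the actual preprocessing uses CoreNLP tokenization, and case-insensitive matching here is an illustrative choice):

```python
def exact_match_spans(paragraph_tokens, answer_tokens):
    """Return all (start, end) token spans whose tokens exactly match the answer."""
    spans, n = [], len(answer_tokens)
    if n == 0:
        return spans
    lowered = [t.lower() for t in paragraph_tokens]
    answer = [t.lower() for t in answer_tokens]
    for start in range(len(lowered) - n + 1):
        if lowered[start:start + n] == answer:
            spans.append((start, start + n - 1))  # inclusive end index
    return spans

# Every occurrence is labeled as a correct span, even in distractor paragraphs.
print(exact_match_spans("the UK civil service pensions are taxed in the UK".split(),
                        "the UK".split()))
```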
We use a batch size of 75 (25 unique questions), and limit the length of the paragraphs (before concatenation) to 600 tokens.
2019
222
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2310–2320 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics
E3: Entailment-driven Extracting and Editing for Conversational Machine Reading
Victor Zhong, University of Washington, [email protected]
Luke Zettlemoyer, University of Washington, [email protected]
Abstract
Conversational machine reading systems help users answer high-level questions (e.g. determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made (e.g. whether they need certain income levels or veteran status). The key challenge is that these rules are only provided in the form of a procedural text (e.g. guidelines from a government website) which the system must read to figure out what to ask the user. We present a new conversational machine reading model that jointly extracts a set of decision rules from the procedural text while reasoning about which are entailed by the conversational history and which still need to be edited to create questions for the user. On the recently introduced ShARC conversational machine reading dataset, our Entailment-driven Extract and Edit network (E3) achieves a new state-of-the-art, outperforming existing systems as well as a new BERT-based baseline. In addition, by explicitly highlighting which information still needs to be gathered, E3 provides a more explainable alternative to prior work. We release source code for our models and experiments at https://github.com/vzhong/e3.
1 Introduction
In conversational machine reading (CMR), a system must help users answer high-level questions by participating in an information gathering dialog. For example, in Figure 1 the system asks a series of questions to help the user decide if they need to pay tax on their pension. A key challenge in CMR is that the rules by which the decision is made are only provided in natural language (e.g. the rule text in Figure 1). At every step of the conversation, the system must read the rules text and reason about what has already been said in order to formulate the best next question.
Figure 1 (contents): Rule text: "# 4. Tax when you live abroad. If you're not a UK resident, you don't usually pay UK tax on your pension. But you might have to pay tax in the country you live in. There are a few exceptions - for example, UK civil service pensions will always be taxed in the UK." User scenario: "I get my money from a business I have. We get our funding from a private bank." Initial user question: "Do I need to pay UK tax on my pension?" Previous question: "Are you a UK resident?" Previous user response: "No" Model output: "Are you receiving UK civil service pensions?"
Figure 1: A conversational machine reading example. The model is given a rule text document, which contains a recipe of implicit rules (underlined) for answering the initial user question. At the start of the conversation, the user presents a scenario describing their situation. During each turn, the model can ask the user a follow-up question to inquire about missing information, or conclude the dialogue by answering yes, no, or irrelevant. irrelevant means that the rule text cannot answer the question. We show previous turns as well as the corresponding inquired rules in green. The scenario is shown in red and in this case does not correspond to a rule. The model inquiry for this turn and its corresponding rule are shown in blue.
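To make the task setup concrete, the example in Figure 1 can be summarized in a small record like the one below; the class and field names are purely illustrative and do not follow the ShARC data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CMRExample:
    rule_text: str              # procedural text containing the implicit rules
    question: str               # initial user question
    scenario: str               # user-provided description of their situation
    history: List[Tuple[str, str]] = field(default_factory=list)  # (follow-up question, user answer)
    # target is "yes", "no", "irrelevant", or the follow-up question to ask next
    target: str = ""

example = CMRExample(
    rule_text="If you're not a UK resident, you don't usually pay UK tax on your pension. ...",
    question="Do I need to pay UK tax on my pension?",
    scenario="I get my money from a business I have. We get our funding from a private bank.",
    history=[("Are you a UK resident?", "No")],
    target="Are you receiving UK civil service pensions?",
)
```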
We present a new model that jointly reasons about what rules are present in the text and which are already entailed by the conversational history to improve question generation. More specifically, we propose the Entailment-driven Extract and Edit network (E3). E3 learns to extract implicit rules in the document, identify which rules are entailed by the conversation history, and edit rules that are not entailed to create follow-up questions to the user. During each turn, E3 parses the rule text to extract spans in the text that correspond to implicit rules (underlined in Figure 1). Next, the model scores the degree to which each extracted rule is entailed by the initial user scenario (red in Figure 1) and by previous interactions with the user (green in Figure 1). Finally, the model decides on a response by directly answering the question (yes/no), stating that the rule text does not contain sufficient information to answer the question (irrelevant), or asking a follow-up question about an extracted rule that is not entailed but needed to determine the answer (blue in Figure 1). In the case of inquiry, the model edits an extracted rule into a follow-up question. To our knowledge, E3 is the first extract-and-edit method for conversational dialogue, as well as the first method that jointly infers implicit rules in text, estimates entailment, inquires about missing information, and answers the question. We compare E3 to the previous-best systems as well as a new, strong, BERT-based extractive question answering model (BERTQA) on the recently proposed ShARC CMR dataset (Saeidi et al., 2018). Our results show that E3 is more accurate in its decisions and generates more relevant inquiries. In particular, E3 outperforms the previous-best model by 5.7% in micro-averaged decision accuracy and 4.3 in inquiry BLEU4. Similarly, E3 outperforms the BERTQA baseline by 4.0% micro-averaged decision accuracy and 2.4 in inquiry BLEU4. In addition to outperforming previous methods, E3 is explainable in the sense that one can visualize what rules the model extracted and how previous interactions and inquiries ground to the extracted rules. We release source code for E3 and the BERTQA model at https://github.com/vzhong/e3.
2 Related Work
Dialogue tasks. Recently, there has been growing interest in question answering (QA) in a dialogue setting (Choi et al., 2018; Reddy et al., 2019). CMR (Saeidi et al., 2018) differs from dialogue QA in the domain covered (regulatory text vs Wikipedia). A consequence of this is that CMR requires the interpretation of complex decision rules in order to answer high-level questions, whereas dialogue QA typically contains questions whose answers are directly extractable from the text. In addition, CMR requires the formulation of free-form follow-up questions in order to identify whether the user satisfies decision rules, whereas dialogue QA does not. There has also been significant work on task-oriented dialogue, where the system must inquire about missing information in order to help the user achieve a goal (Williams et al., 2013; Henderson et al., 2014; Mrkšić et al., 2017; Young et al., 2013). However, these tasks are typically constrained to a fixed ontology (e.g. restaurant reservation), instead of a latent ontology specified via natural language documents.
Dialogue systems.
One traditional approach for designing dialogue systems divides the task into language understanding/state-tracking (Mrkšić et al., 2017; Zhong et al., 2018), reasoning/policy learning (Su et al., 2016), and response generation (Wen et al., 2015). The models for each of these subtasks are then combined to form a full dialogue system (Young et al., 2013; Wen et al., 2017). The previous best system for ShARC (Saeidi et al., 2018) similarly breaks the CMR task into subtasks and combines hand-designed sub-models for decision classification, entailment, and follow-up generation. In contrast, the core reasoning (e.g. non-editor) components of E3 are jointly trained, and do not require complex hand-designed features.
Extracting latent rules from text. There is a long history of work on extracting knowledge automatically from text (Moulin and Rousseau, 1992). Relation extraction typically assumes that there is a fixed ontology onto which extracted knowledge falls (Mintz et al., 2009; Riedel et al., 2013). Other works forgo the ontology by using, for example, natural language (Angeli and Manning, 2014; Angeli et al., 2015). These extractions from text are subsequently used for inference over a knowledge base (Bordes et al., 2013; Dettmers et al., 2018; Lin et al., 2018) and rationalizing model predictions (Lei et al., 2016). Our work is more similar to the latter type, in which the extracted knowledge is not confined to a fixed ontology and instead differs on a document basis. In addition, the rules extracted by our model are used for inference over natural language documents. Finally, these rules provide rationalization for the model's decision making, in the sense that the user can visualize what rules the model extracted and which rules are entailed by previous turns.
3 Entailment-driven Extract and Edit network
In conversational machine reading, a system reads a document that contains a set of implicit decision rules.
[Model architecture figure; the caption was not recovered. Recoverable labels: question x^Q, rule text x^D, scenario x^S, follow-up QA turns x^{H,1}, x^{H,2}, ..., x^{H,n_H}; BERT transformer encoder; rule extraction layer; input self-attention layer; rule self-attention layer; decision classifier; extracted rules r_1, r_2, ..., r_{n_R}; decision score z_yes.]
SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit> <latexit sha1_base64="MewD0k4ZtvhTJqLV3S7p7CGJsnQ=">AB9X icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2lo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t 39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRBJBp SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit>zyes <latexit sha1_base64="MewD0k4ZtvhTJqLV3S7p7CGJsnQ=">AB9X icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2lo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t 39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRBJBp SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit> <latexit sha1_base64="MewD0k4ZtvhTJqLV3S7p7CGJsnQ=">AB9X icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2lo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t 39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRBJBp SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit> <latexit sha1_base64="MewD0k4ZtvhTJqLV3S7p7CGJsnQ=">AB9X icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2lo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t 39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRBJBp SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit> <latexit sha1_base64="MewD0k4ZtvhTJqLV3S7p7CGJsnQ=">AB9X icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Y8mkmTY0yQxJRhmH/ocbF4q49V/c+Tdm2lo64HA4Zx7uScniDnTxnW/ndLa+sbmVnm7srO7t 39QPTzq6ChRhLZJxCPVC7CmnEnaNsxw2osVxSLgtBtMr3O/+0CVZpG8M2lMfYHkoWMYGOl+6fhQGAzUSJLqZ4NqzW37s6BVolXkBoUaA2rX4NRBJBp SEca93Nj4GVaGEU5nlUGiaYzJFI9p31KJBdV+Nk89Q2dWGaEwUvZJg+bq740MC61TEdjJPKNe9nLxP6+fmPDKz5iME0MlWRwKE45MhPIK0IgpSgxP LcFEMZsVkQlWmBhbVMW4C1/eZV0LuqeW/duG7Vmo6ijDCdwCufgwSU04QZa0AYCp7hFd6cR+fFeXc+FqMlp9g5hj9wPn8AVbKS/w=</latexit> zno <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 
0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit>zno <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> <latexit sha1_base64="23k5hj9mv0vpUcXAY3RSMY5bcNE=">AB9H icbVDLSgMxFL1TX7W+qi7dBIvgqsxIQZcFNy4r2Ae0Q8mkaRuax5hkCnXod7hxoYhbP8adf2OmnYW2HgczrmXe3KimDNjf/bK2xsbm3vFHdLe/sHh 0fl45OWUYkmtEkUV7oTYUM5k7RpmeW0E2uKRcRpO5rcZn57SrVhSj7YWUxDgUeSDRnB1knhU78nsB1rkUo175crftVfAK2TICcVyNHol796A0USQaUlH BvTDfzYhinWlhFO56VeYmiMyQSPaNdRiQU1YboIPUcXThmgodLuSYsW6u+NFAtjZiJyk1lEs+pl4n9eN7HDmzBlMk4slWR5aJhwZBXKGkADpimxfOYI Jpq5rIiMscbEup5KroRg9cvrpHVDfxqcF+r1Gt5HU4g3O4hACuoQ530IAmEHiEZ3iFN2/qvXjv3sdytODlO6fwB97nD3tBkoE=</latexit> zirrelevant <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n 
icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit>zirrelevant <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 
7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> <latexit sha1_base64="cmXPbPA3k7RAXZcMXiBD84brFe4=">AB/n icbVBNSwMxFMzWr1q/VsWTl2ARPJVdKeix4MVjBdsK7bJk09c2NMkuSbZQl4J/xYsHRbz6O7z5b8y2e9DWgcAw8x5vMlHCmTae9+2U1tY3NrfK25Wd3 b39A/fwqK3jVFo0ZjH6iEiGjiT0DLMcHhIFBARcehE45vc70xAaRbLezNIBkKNmAUWKsFLonj2FPEDNSImNKAYcJkWYWulWv5s2BV4lfkCoq0Azdr 14/pqkAaSgnWnd9LzFBRpRhlMOs0ks1JISOyRC6lkoiQAfZP4Mn1uljwexsk8aPFd/b2REaD0VkZ3Mo+plLxf/87qpGVwHGZNJakDSxaFByrGJcd4F 7jMF1PCpJYQqZrNiOiKUGMbq9gS/OUvr5L2Zc3av5dvdqoF3WU0Sk6QxfIR1eogW5RE7UQRl6Rq/ozXlyXpx352MxWnKnWP0B87nD2/lmE=</l atexit> … x <latexit sha1_base64="BJzBhsL wXSTB5Lw6Dgv9f7gkUY=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW 4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15O KpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bH DojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLk oTAUxMZl/TYZcITNiaglitbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqt TyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4g OM7g=</latexit> <latexit sha1_base64="BJzBhsL wXSTB5Lw6Dgv9f7gkUY=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW 4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15O KpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bH DojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLk oTAUxMZl/TYZcITNiaglitbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqt TyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4g OM7g=</latexit> <latexit sha1_base64="BJzBhsL wXSTB5Lw6Dgv9f7gkUY=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW 4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15O KpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bH DojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLk oTAUxMZl/TYZcITNiaglitbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqt TyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4g OM7g=</latexit> <latexit sha1_base64="BJzBhsL wXSTB5Lw6Dgv9f7gkUY=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48t2A9oQ9lsJ+3azSbsbsQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW 4EdhOFNAoEdoLJ7dzvPKLSPJb3ZpqgH9GR5CFn1Fip+TQoV9yquwBZJ15O KpCjMSh/9YcxSyOUhgmqdc9zE+NnVBnOBM5K/VRjQtmEjrBnqaQRaj9bH DojF1YZkjBWtqQhC/X3REYjradRYDsjasZ61ZuL/3m91IQ3fsZlkhqUbLk oTAUxMZl/TYZcITNiaglitbCRtTRZmx2ZRsCN7qy+ukfVX13KrXrFXqt TyOIpzBOVyCB9dQhztoQAsYIDzDK7w5D86L8+58LFsLTj5zCn/gfP4A4g OM7g=</latexit> U <latexit sha1_base64="i+1/OIq J7WOlfZ3Tw4+VUkqns5I=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMfRZIhL1EFKNgkv0DT cCH1KFNA4FdsPJ7dzvPqHSPJH3ZpiENOR5BFn1Fip7Q+qNbfuLkDWiVeQ GhRoDapf/WHCshilYJq3fPc1AQ5VYzgbNKP9OYUjahI+xZKmMOsgXh 87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZgYlWy6 KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUWZsNhUbgrf68jrpXNU9t+61G7Vmo 4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94 zL</latexit> <latexit sha1_base64="i+1/OIq J7WOlfZ3Tw4+VUkqns5I=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMfRZIhL1EFKNgkv0DT cCH1KFNA4FdsPJ7dzvPqHSPJH3ZpiENOR5BFn1Fip7Q+qNbfuLkDWiVeQ GhRoDapf/WHCshilYJq3fPc1AQ5VYzgbNKP9OYUjahI+xZKmMOsgXh 
87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZgYlWy6 KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUWZsNhUbgrf68jrpXNU9t+61G7Vmo 4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94 zL</latexit> <latexit sha1_base64="i+1/OIq J7WOlfZ3Tw4+VUkqns5I=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMfRZIhL1EFKNgkv0DT cCH1KFNA4FdsPJ7dzvPqHSPJH3ZpiENOR5BFn1Fip7Q+qNbfuLkDWiVeQ GhRoDapf/WHCshilYJq3fPc1AQ5VYzgbNKP9OYUjahI+xZKmMOsgXh 87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZgYlWy6 KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUWZsNhUbgrf68jrpXNU9t+61G7Vmo 4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94 zL</latexit> <latexit sha1_base64="i+1/OIq J7WOlfZ3Tw4+VUkqns5I=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48tmFZoQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w 8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fiko5NMfRZIhL1EFKNgkv0DT cCH1KFNA4FdsPJ7dzvPqHSPJH3ZpiENOR5BFn1Fip7Q+qNbfuLkDWiVeQ GhRoDapf/WHCshilYJq3fPc1AQ5VYzgbNKP9OYUjahI+xZKmMOsgXh 87IhVWGJEqULWnIQv09kdNY62kc2s6YmrFe9ebif14vM9FNkHOZgYlWy6 KMkFMQuZfkyFXyIyYWkKZ4vZWwsZUWZsNhUbgrf68jrpXNU9t+61G7Vmo 4ijDGdwDpfgwTU04Q5a4AMDhGd4hTfn0Xlx3p2PZWvJKWZO4Q+czx+s94 zL</latexit> R1 <latexit sha1_base64="vzdjoaD zv9nOg/w28vuA3tuVlE=">AB63icbVDLSgNBEOyNrxhfUY9eBoPgKex KQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKq6 e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2Z bTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6C gtSgQGtY/RqMFEkFlZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2 eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJ cFKcWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI6VzVA78e3Ddqz UYRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/AC PgjZY=</latexit> <latexit sha1_base64="vzdjoaD zv9nOg/w28vuA3tuVlE=">AB63icbVDLSgNBEOyNrxhfUY9eBoPgKex KQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKq6 e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2Z bTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6C gtSgQGtY/RqMFEkFlZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2 eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJ cFKcWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI6VzVA78e3Ddqz UYRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/AC PgjZY=</latexit> <latexit sha1_base64="vzdjoaD zv9nOg/w28vuA3tuVlE=">AB63icbVDLSgNBEOyNrxhfUY9eBoPgKex KQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKq6 e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2Z bTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6C gtSgQGtY/RqMFEkFlZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2 eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJ cFKcWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI6VzVA78e3Ddqz UYRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/AC PgjZY=</latexit> <latexit sha1_base64="vzdjoaD zv9nOg/w28vuA3tuVlE=">AB63icbVDLSgNBEOyNrxhfUY9eBoPgKex KQI8BLx6jmAckS5idzCZD5rHMzAphyS948aCIV3/Im3/jbLIHTSxoKq6 e6KEs6M9f1vr7SxubW9U96t7O0fHB5Vj086RqWa0DZRXOlehA3lTNK2Z bTXqIpFhGn3Wh6m/vdJ6oNU/LRzhIaCjyWLGYE21x6GAZoWK35dX8BtE6C gtSgQGtY/RqMFEkFlZwbEw/8BMbZlhbRjidVwapoQkmUzymfUclFtSE2 eLWObpwygjFSruSFi3U3xMZFsbMROQ6BbYTs+rl4n9eP7XxTZgxmaSWSrJ cFKcWYXyx9GIaUosnzmCiWbuVkQmWGNiXTwVF0Kw+vI6VzVA78e3Ddqz UYRxnO4BwuIYBraMIdtKANBCbwDK/w5gnvxXv3PpatJa+YOYU/8D5/AC PgjZY=</latexit> RnR <latexit sha1_base64="9QkYS9H YIiCV3i5Jm1pK7jwtw=">AB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0l E0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bY WZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTjLdZIhPdC6nhUijeRo GS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cX IOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VN 
ObGzxfnzsiFVYkSrQthWSh/p7IaWzMNA5tZ0xbFa9ufif18wuvVzodI MuWLRVEmCSZk/jsZCs0ZyqklGlhbyVsTDVlaBOq2BC81ZfXSeq7rl17 +G61rgu4ijDGZzDJXhwAw24hya0gcEnuEV3pzUeXHenY9la8kpZk7hD5 zPH0kRj3o=</latexit> <latexit sha1_base64="9QkYS9H YIiCV3i5Jm1pK7jwtw=">AB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0l E0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bY WZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTjLdZIhPdC6nhUijeRo GS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cX IOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VN ObGzxfnzsiFVYkSrQthWSh/p7IaWzMNA5tZ0xbFa9ufif18wuvVzodI MuWLRVEmCSZk/jsZCs0ZyqklGlhbyVsTDVlaBOq2BC81ZfXSeq7rl17 +G61rgu4ijDGZzDJXhwAw24hya0gcEnuEV3pzUeXHenY9la8kpZk7hD5 zPH0kRj3o=</latexit> <latexit sha1_base64="9QkYS9H YIiCV3i5Jm1pK7jwtw=">AB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0l E0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bY WZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTjLdZIhPdC6nhUijeRo GS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cX IOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VN ObGzxfnzsiFVYkSrQthWSh/p7IaWzMNA5tZ0xbFa9ufif18wuvVzodI MuWLRVEmCSZk/jsZCs0ZyqklGlhbyVsTDVlaBOq2BC81ZfXSeq7rl17 +G61rgu4ijDGZzDJXhwAw24hya0gcEnuEV3pzUeXHenY9la8kpZk7hD5 zPH0kRj3o=</latexit> <latexit sha1_base64="9QkYS9H YIiCV3i5Jm1pK7jwtw=">AB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0l E0GPBi8da7Ae0IWy2m3bpZhN2J0IJ/RFePCji1d/jzX/jts1BWx8MPN6bY WZemEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTjLdZIhPdC6nhUijeRo GS91LNaRxK3g0nd3O/+8S1EYl6xGnK/ZiOlIgEo2ilbivIVdCaBdWaW3cX IOvEK0gNCjSD6tdgmLAs5gqZpMb0PTdFP6caBZN8VhlkhqeUTeiI9y1VN ObGzxfnzsiFVYkSrQthWSh/p7IaWzMNA5tZ0xbFa9ufif18wuvVzodI MuWLRVEmCSZk/jsZCs0ZyqklGlhbyVsTDVlaBOq2BC81ZfXSeq7rl17 +G61rgu4ijDGZzDJXhwAw24hya0gcEnuEV3pzUeXHenY9la8kpZk7hD5 zPH0kRj3o=</latexit> · · · <latexit sha1_base64="gNYy28 tHbW2zILQMm4k1oYvY8=">AB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bY WZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxtMSaV7ATVcioS3Ua DkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGy TryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3 Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiF P2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZU4Y2oIoNwVt9eZ10ruqeW/fuG 7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn 8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28 tHbW2zILQMm4k1oYvY8=">AB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bY WZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxtMSaV7ATVcioS3Ua DkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGy TryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3 Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiF P2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZU4Y2oIoNwVt9eZ10ruqeW/fuG 7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn 8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28 tHbW2zILQMm4k1oYvY8=">AB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bY WZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxtMSaV7ATVcioS3Ua DkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGy TryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3 Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiF P2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZU4Y2oIoNwVt9eZ10ruqeW/fuG 7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn 8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28 tHbW2zILQMm4k1oYvY8=">AB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0m koMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bY WZekEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pGJVpxtMSaV7ATVcioS3Ua DkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGy 
TryC1KBAa1j9GoSKZTFPkElqTN9zU/RzqlEwyWeVQWZ4StmEjnjf0oTG3 Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiF P2HJRlEmCisxfJ6HQnKGcWkKZFvZWwsZU4Y2oIoNwVt9eZ10ruqeW/fuG 7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn 8AqiGPIQ=</latexit> Rule scorer A1 <latexit sha1_base64="Kbzmn1Oe6Ur0QF1ftChpfZF1ug=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+w eFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+ BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbC xlRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit> <latexit sha1_base64="Kbzmn1Oe6Ur0QF1ftChpfZF1ug=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+w eFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+ BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbC xlRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit> <latexit sha1_base64="Kbzmn1Oe6Ur0QF1ftChpfZF1ug=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+w eFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+ BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbC xlRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit> <latexit sha1_base64="Kbzmn1Oe6Ur0QF1ftChpfZF1ug=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8VLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+w eFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1Fjp4WbgDcoVt+ouQNaJl5MK5GgOyl/9YczSCKVhgmrd89zE+ BlVhjOBs1I/1ZhQNqEj7FkqaYTazxanzsiFVYkjJUtachC/T2R0UjraRTYzoiasV715uJ/Xi814bWfcZmkBiVbLgpTQUxM5n+TIVfIjJhaQpni9lbC xlRZmw6JRuCt/ryOmlfVT236t3XKo1aHkcRzuAcLsGDOjTgDprQAgYjeIZXeHOE8+K8Ox/L1oKTz5zCHzifP7OhjVs=</latexit> AnR <latexit sha1_base64="H9wlhGkXhTHP9mp8ey4PKoIiZyM=">AB7n icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbTbt0swm7E6GE/gvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2 z+oHh61TZJpxlskYnuhtRwKRvoUDJu6nmNA4l74Tj25nfeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3pe W6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC 3krYiGrK0CZUsSF4y+vkvZF3XPr3v1lrXFZxFGEziFc/DgChpwB01oAYMxPMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit> <latexit sha1_base64="H9wlhGkXhTHP9mp8ey4PKoIiZyM=">AB7n icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbTbt0swm7E6GE/gvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2 z+oHh61TZJpxlskYnuhtRwKRvoUDJu6nmNA4l74Tj25nfeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3pe W6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC 3krYiGrK0CZUsSF4y+vkvZF3XPr3v1lrXFZxFGEziFc/DgChpwB01oAYMxPMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit> <latexit sha1_base64="H9wlhGkXhTHP9mp8ey4PKoIiZyM=">AB7n icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbTbt0swm7E6GE/gvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2 
z+oHh61TZJpxlskYnuhtRwKRvoUDJu6nmNA4l74Tj25nfeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3pe W6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC 3krYiGrK0CZUsSF4y+vkvZF3XPr3v1lrXFZxFGEziFc/DgChpwB01oAYMxPMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit> <latexit sha1_base64="H9wlhGkXhTHP9mp8ey4PKoIiZyM=">AB7n icbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPFi8cq9gPaEDbTbt0swm7E6GE/gvHhTx6u/x5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmltfWNzq7xd2dnd2 z+oHh61TZJpxlskYnuhtRwKRvoUDJu6nmNA4l74Tj25nfeLaiEQ94iTlfkyHSkSCUbRS5ybIVfAwDao1t+7OQVaJV5AaFGgG1a/+IGFZzBUySY3pe W6Kfk41Cib5tNLPDE8pG9Mh71mqaMyNn8/PnZIzqwxIlGhbCslc/T2R09iYSRzazpjiyCx7M/E/r5dhdO3nQqUZcsUWi6JMEkzI7HcyEJozlBNLKNPC 3krYiGrK0CZUsSF4y+vkvZF3XPr3v1lrXFZxFGEziFc/DgChpwB01oAYMxPMrvDmp8+K8Ox+L1pJTzBzDHzifPy7nj2k=</latexit> · · · <latexit sha1_base64="gNYy28tHbW2zILQMm4k1oYvY8=">AB7X icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/c HhUPT7pGJVpxtMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU /RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZW wsZU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28tHbW2zILQMm4k1oYvY8=">AB7X icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/c HhUPT7pGJVpxtMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU /RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZW wsZU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28tHbW2zILQMm4k1oYvY8=">AB7X icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/c HhUPT7pGJVpxtMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU /RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZW wsZU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ=</latexit> <latexit sha1_base64="gNYy28tHbW2zILQMm4k1oYvY8=">AB7X icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Ae0oWw2m3btJht2J0IJ/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZekEph0HW/ndLG5tb2Tnm3srd/c HhUPT7pGJVpxtMSaV7ATVcioS3UaDkvVRzGgeSd4PJ7dzvPnFthEoecJpyP6ajRESCUbRSZ8BChWZYrbl1dwGyTryC1KBAa1j9GoSKZTFPkElqTN9zU /RzqlEwyWeVQWZ4StmEjnjf0oTG3Pj54toZubBKSCKlbSVIFurviZzGxkzjwHbGFMdm1ZuL/3n9DKMbPxdJmiFP2HJRlEmCisxfJ6HQnKGcWkKZFvZW wsZU4Y2oIoNwVt9eZ10ruqeW/fuG7Vmo4ijDGdwDpfgwTU04Q5a0AYGj/AMr/DmKOfFeXc+lq0lp5g5hT9wPn8AqiGPIQ=</latexit> C <latexit sha1_base64="y5YGHW4NRn032l4c2SASqYAwvmQ=">AB6H icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RS2tnd294r7pYPDo +OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwTVu+5ifEzq gxnAuelQaoxoWxKx9i3VNItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz0oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ=</latexit> <latexit sha1_base64="y5YGHW4NRn032l4c2SASqYAwvmQ=">AB6H icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RS2tnd294r7pYPDo 
+OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwTVu+5ifEzq gxnAuelQaoxoWxKx9i3VNItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz0oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ=</latexit> <latexit sha1_base64="y5YGHW4NRn032l4c2SASqYAwvmQ=">AB6H icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RS2tnd294r7pYPDo +OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwTVu+5ifEzq gxnAuelQaoxoWxKx9i3VNItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz0oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ=</latexit> <latexit sha1_base64="y5YGHW4NRn032l4c2SASqYAwvmQ=">AB6H icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMdCLx5bsB/QhrLZTtq1m03Y3Qgl9Bd48aCIV3+SN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RS2tnd294r7pYPDo +OT8ulZR8epYthmsYhVL6AaBZfYNtwI7CUKaRQI7AbTxsLvPqHSPJYPZpagH9Gx5CFn1Fip1RiWK27VXYJsEi8nFcjRHJa/BqOYpRFKwTVu+5ifEzq gxnAuelQaoxoWxKx9i3VNItZ8tD52TK6uMSBgrW9KQpfp7IqOR1rMosJ0RNRO97i3E/7x+asI7P+MySQ1KtloUpoKYmCy+JiOukBkxs4Qyxe2thE2o oszYbEo2BG/95U3Sual6btVr1Sr1Wh5HES7gEq7Bg1uowz0oQ0MEJ7hFd6cR+fFeXc+Vq0FJ585hz9wPn8Aka+MuQ=</latexit> Extraction Module Entailment scorer gi <latexit sha1_base64="8v5kiaA9t7Fx9mcIlNKMYFYePVo=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW/4 PCoenzS0UmGLZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz 6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkb U0WZselUbAje6svrpHNV9y6d9+oNRtFHGU4g3O4BA+uoQl30I2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit> <latexit sha1_base64="8v5kiaA9t7Fx9mcIlNKMYFYePVo=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW/4 PCoenzS0UmGLZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz 6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkb U0WZselUbAje6svrpHNV9y6d9+oNRtFHGU4g3O4BA+uoQl30I2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit> <latexit sha1_base64="8v5kiaA9t7Fx9mcIlNKMYFYePVo=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW/4 PCoenzS0UmGLZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz 6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkb U0WZselUbAje6svrpHNV9y6d9+oNRtFHGU4g3O4BA+uoQl30I2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit> <latexit sha1_base64="8v5kiaA9t7Fx9mcIlNKMYFYePVo=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48V7Qe0oWy2k3TpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAqujet+O6WNza3tnfJuZW/4 PCoenzS0UmGLZIhLVC6hGwSW2DTcCe6lCGgcCu8Hkdu53n1BpnshHM03Rj2kecgZNVZ6iIZ8WK25dXcBsk68gtSgQGtY/RqMEpbFKA0TVOu+56bGz 6kynAmcVQaZxpSyCY2wb6mkMWo/X5w6IxdWGZEwUbakIQv190ROY62ncWA7Y2rGetWbi/95/cyEN37OZoZlGy5KMwEMQmZ/01GXCEzYmoJZYrbWwkb U0WZselUbAje6svrpHNV9y6d9+oNRtFHGU4g3O4BA+uoQl30I2MIjgGV7hzRHOi/PufCxbS04xcwp/4Hz+AEJ0jbk=</latexit> hi <latexit sha1_base64="EtR5b7t+XdzbNQGDeG4n6cxuEuQ=">AB6n 
icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+w eFR9fikrZNMfRZIhLVDalGwSX6huB3VQhjUOBnXByO/c7T6g0T+SjmaYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01Nk FNlOBM4q/QzjSlEzrCnqWSxqiDfHqjFxYZUiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3Q5l2lmULloigTxCRk/jcZcoXMiKklClubyVs TBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQt8YDCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit> <latexit sha1_base64="EtR5b7t+XdzbNQGDeG4n6cxuEuQ=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+w eFR9fikrZNMfRZIhLVDalGwSX6huB3VQhjUOBnXByO/c7T6g0T+SjmaYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01Nk FNlOBM4q/QzjSlEzrCnqWSxqiDfHqjFxYZUiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3Q5l2lmULloigTxCRk/jcZcoXMiKklClubyVs TBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQt8YDCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit> <latexit sha1_base64="EtR5b7t+XdzbNQGDeG4n6cxuEuQ=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+w eFR9fikrZNMfRZIhLVDalGwSX6huB3VQhjUOBnXByO/c7T6g0T+SjmaYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01Nk FNlOBM4q/QzjSlEzrCnqWSxqiDfHqjFxYZUiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3Q5l2lmULloigTxCRk/jcZcoXMiKklClubyVs TBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQt8YDCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit> <latexit sha1_base64="EtR5b7t+XdzbNQGDeG4n6cxuEuQ=">AB6n icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeCF48VTVtoQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+w eFR9fikrZNMfRZIhLVDalGwSX6huB3VQhjUOBnXByO/c7T6g0T+SjmaYxHQkecQZNVZ6GA/4oFpz6+4CZJ14BalBgdag+tUfJiyLURomqNY9z01Nk FNlOBM4q/QzjSlEzrCnqWSxqiDfHqjFxYZUiRNmShizU3xM5jbWexqHtjKkZ61VvLv7n9TIT3Q5l2lmULloigTxCRk/jcZcoXMiKklClubyVs TBVlxqZTsSF4qy+vk/ZV3XPr3n2j1mwUcZThDM7hEjy4hibcQt8YDCZ3iFN0c4L86787FsLTnFzCn8gfP5A0P6jbo=</latexit> Entailment Module Decision Module zinquire <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit 
sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it>zinquire <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> <latexit sha1_base64="UMvom97kWv8j4eUEwQDxge3lRDI=">AB+3 icbVDLSsNAFL3xWesr1qWbYBFclUQKuiy4cVnBPqANYTKdtENnJnFmItaQX3HjQhG3/og7/8ZJm4W2Hhg4nHMvc8JE0aVdt1va219Y3Nru7JT3d3bP zi0j2pdFacSkw6OWSz7IVKEUE6mpG+okiIeM9MLpdeH3HohUNBZ3epYQn6OxoBHFSBspsGtPwZAjPZE8o+I+pZLkgV13G+4czirxSlKHEu3A/hqOY pxyIjRmSKmB5ybaz5DUFDOSV4epIgnCUzQmA0MF4kT52fz23DkzysiJYme0M5c/b2RIa7UjIdmsrhTLXuF+J83SHV05ZtQSaqJwIuPopQ5OnaKIpyR yYo1mxmCsKTmVgdPkERYm7qpgRvOfIq6V40PLfh3TbrWZRwVO4BTOwYNLaMENtKEDGB7hGV7hzcqtF+vd+liMrlnlzjH8gfX5AwTwlQ=</latex it> Figure 2: The Entailment-driven Extract and Edit network. rules. The user presents a scenario describing their situation, and asks the system an underspecified question. In order to answer the user’s question, the system must ask the user a series of follow-up questions to determine whether the user satisfies the set of decision rules. The key challenges in CMR are to identify implicit rules present in the document, understand which rules are necessary to answer the question, and inquire about necessary rules that are not entailed by the conversation history by asking follow-up questions. The three core modules of E3, the extraction, entailment, and decision modules, combine to address these challenges. Figure 2 illustrates the components of E3. For ease of exposition, we describe E3 for a single turn in the conversation. To make the references concrete in the following sections, we use as an example the inputs and outputs from Figure 1. 
This example describes a turn in a conversation in which the system helps the user determine whether they need to pay UK taxes on their pension.

3.1 Extraction module

The extraction module extracts spans from the document that correspond to latent rules. Let $x_D$, $x_Q$, $x_S$, and $x_{H,i}$ denote the words in the rule text, question, scenario, and the inquiry and user response during the $i$th previous turn of the dialogue after $N$ turns have passed. We concatenate these inputs into a single sequence $x = [x_Q; x_D; x_S; x_{H,1}; \cdots; x_{H,N}]$, joined by sentinel tokens that mark the boundaries of each input.

To encode the input for the extraction module, we use BERT, a transformer-based model (Vaswani et al., 2017) that achieves consistent gains on a variety of NLP tasks (Devlin et al., 2019). We encode $x$ using the BERT encoder, which first converts words into word piece tokens (Wu et al., 2016), then embeds these tokens along with their positional embeddings and segmentation embeddings. These embeddings are subsequently encoded via a transformer network, which allows for inter-token attention at each layer. Let $n_x$ be the number of tokens in the concatenated input $x$ and $d_U$ be the output dimension of the BERT encoder. For brevity, we denote the output of the BERT encoder as $U = \mathrm{BERT}(x) \in \mathbb{R}^{n_x \times d_U}$ and refer readers to Devlin et al. (2019) for the detailed architecture.

In order to extract the implicit decision rules from the document, we compute a start score $\alpha_i$ and an end score $\beta_i$ for the $i$th token as

$\alpha_i = \sigma(W_\alpha U_i + b_\alpha) \in \mathbb{R}$  (1)
$\beta_i = \sigma(W_\beta U_i + b_\beta) \in \mathbb{R}$  (2)

where $W_\alpha, W_\beta \in \mathbb{R}^{d_U}$, $b_\alpha, b_\beta \in \mathbb{R}$, and $\sigma$ is the sigmoid function. For each position $s_i$ where $\alpha_{s_i}$ is larger than some threshold $\tau$, we find the closest following position $e_i \geq s_i$ where $\beta_{e_i} > \tau$. Each pair $(s_i, e_i)$ then forms an extracted span corresponding to a rule $R_i$ expressed in the rule text. In the example in Figure 1, the correct extracted spans are "UK resident" and "UK civil service pensions".

For the $i$th rule, we use self-attention to build a representation $A_i$ over the span $(s_i, e_i)$:

$\gamma_k = W_\gamma U_k + b_\gamma \in \mathbb{R}, \quad s_i \leq k \leq e_i$  (3)
$\bar{\gamma}_k = \mathrm{softmax}(\gamma)_k \in \mathbb{R}, \quad s_i \leq k \leq e_i$  (4)
$A_i = \sum_{k=s_i}^{e_i} \bar{\gamma}_k U_k \in \mathbb{R}^{d_U}$  (5)

where $W_\gamma \in \mathbb{R}^{d_U}$ and $b_\gamma \in \mathbb{R}$. Here, $\gamma_k$ and $\bar{\gamma}_k$ are respectively the unnormalized and normalized scores of the self-attention layer.

Let $n_R$ denote the number of spans in the rule text, each of which corresponds to a ground truth rule. The rule extraction loss is computed as the sum of the binary cross entropy losses for each rule $R_i$:

$\mathcal{L}_{\mathrm{re}} = \sum_{i=1}^{n_R} \left( \mathcal{L}_{\mathrm{start},i} + \mathcal{L}_{\mathrm{end},i} \right)$  (6)

Let $n_D$ denote the number of tokens in the rule text, $s_i, e_i$ the ground truth start and end positions of the $i$th rule, and $\mathbb{1}_f$ the indicator function that returns 1 if and only if the condition $f$ holds. Recall from Eq. (1) that $\alpha_j$ and $\beta_j$ denote the probabilities that token $j$ is the start and end of a rule. The start and end binary cross entropy losses for the $i$th rule are computed as

$\mathcal{L}_{\mathrm{start},i} = -\sum_{j=1}^{n_D} \left[ \mathbb{1}_{j=s_i} \log(\alpha_j) + \mathbb{1}_{j \neq s_i} \log(1 - \alpha_j) \right]$
$\mathcal{L}_{\mathrm{end},i} = -\sum_{j=1}^{n_D} \left[ \mathbb{1}_{j=e_i} \log(\beta_j) + \mathbb{1}_{j \neq e_i} \log(1 - \beta_j) \right]$

3.2 Entailment module

Given the extracted rules $R = \{R_1, \cdots, R_{n_R}\}$, the entailment module estimates whether each rule is entailed by the conversation history, so that the model can subsequently inquire about rules that are not entailed. For the example in Figure 1, the rule "UK resident" is entailed by the previous inquiry "Are you a UK resident". In contrast, the rule "UK civil service pensions" is not entailed by either the scenario or the conversation history, so the model needs to inquire about it.
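To make the span extraction concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(5): per-token start and end scores, threshold-based span pairing, and self-attentive span pooling. The parameter names (w_alpha, w_beta, w_gamma, and the biases) and the threshold value are illustrative assumptions, not details taken from the released implementation.

```python
import torch

def extract_spans(U, w_alpha, b_alpha, w_beta, b_beta, w_gamma, b_gamma, tau=0.5):
    """Sketch of Eqs. (1)-(5): score tokens, pair spans, and pool each span.

    U: (n_x, d_U) BERT outputs; w_*: (d_U,) weight vectors; b_*: scalar biases.
    Returns a list of (s_i, e_i) spans and an (n_R, d_U) tensor of representations A_i.
    """
    alpha = torch.sigmoid(U @ w_alpha + b_alpha)  # Eq. (1): start scores
    beta = torch.sigmoid(U @ w_beta + b_beta)     # Eq. (2): end scores

    spans, reps = [], []
    for s in (alpha > tau).nonzero(as_tuple=True)[0].tolist():
        # closest following position e >= s whose end score exceeds the threshold
        ends = (beta[s:] > tau).nonzero(as_tuple=True)[0]
        if len(ends) == 0:
            continue
        e = s + ends[0].item()
        spans.append((s, e))

        gamma = U[s:e + 1] @ w_gamma + b_gamma   # Eq. (3): unnormalized scores
        gamma_bar = torch.softmax(gamma, dim=0)  # Eq. (4): normalized scores
        reps.append(gamma_bar @ U[s:e + 1])      # Eq. (5): span representation A_i

    A = torch.stack(reps) if reps else torch.empty(0, U.size(1))
    return spans, A
```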
In this particular case, the scenario does not entail any rule. For each extracted rule, we compute a score that indicates the extent to which this particular rule has already been discussed in the initial scenario $S$ and in previous turns $Q$. In particular, let $N(R_i, S)$ denote the number of tokens shared by $R_i$ and $S$, $N(R_i)$ the number of tokens in $R_i$, and $N(S)$ the number of tokens in $S$. We compute the scenario entailment score $g_i$ as

$\mathrm{pr}(R_i, S) = \frac{N(R_i, S)}{N(R_i)}$  (7)
$\mathrm{re}(R_i, S) = \frac{N(R_i, S)}{N(S)}$  (8)
$g_i = \mathrm{f1}(R_i, S) = \frac{2\, \mathrm{pr}(R_i, S)\, \mathrm{re}(R_i, S)}{\mathrm{pr}(R_i, S) + \mathrm{re}(R_i, S)}$  (9)

where pr, re, and f1 respectively denote the precision, recall, and F1 scores. We compute a similar score to represent the extent to which the rule $R_i$ has been discussed in previous inquiries. Let $Q_k$ denote the tokens in the $k$th previous inquiry. We compute the history entailment score $h_i$ between the extracted rule $R_i$ and all $n_H$ previous inquiries in the conversation history as

$h_i = \max_{k=1,\cdots,n_H} \mathrm{f1}(R_i, Q_k)$  (10)

The final representation of the $i$th rule, $\bar{A}_i$, is then the concatenation of the span self-attention representation and the entailment scores:

$\bar{A}_i = [A_i; g_i; h_i] \in \mathbb{R}^{d_U+2}$  (11)

where $[x; y]$ denotes the concatenation of $x$ and $y$. We also experiment with embedding- and encoding-similarity based approaches to compute entailment, but find that this F1 approach performs the best. Because the encoder utilizes cross attention between the different components of the input, the representations $U$ and $\bar{A}_i$ are able to capture notions of entailment. However, we find that explicitly scoring entailment via the entailment module further discourages the model from making redundant inquiries.

3.3 Decision module

Given the extracted rules $R$ and the entailment-enriched representation $\bar{A}_i$ for each rule, the decision module decides on a response to the user. These include answering yes/no to the user's original question, determining that the rule text is irrelevant to the question, or inquiring about a rule that is not entailed but required to answer the question. For the example in Figure 1, the rule "UK civil service pensions" is not entailed, hence the correct decision is to ask a follow-up question about whether the user receives this pension.

We start by computing a summary $C$ of the input using self-attention:

$\phi_k = W_\phi U_k + b_\phi \in \mathbb{R}$  (12)
$\bar{\phi}_k = \mathrm{softmax}(\phi)_k \in \mathbb{R}$  (13)
$C = \sum_{k} \bar{\phi}_k U_k \in \mathbb{R}^{d_U}$  (14)

where $W_\phi \in \mathbb{R}^{d_U}$, $b_\phi \in \mathbb{R}$, and $\phi_k$, $\bar{\phi}_k$ are respectively the unnormalized and normalized self-attention weights. Next, we score the choices yes, no, irrelevant, and inquire:

$z = W_z C + b_z \in \mathbb{R}^4$  (15)

where $z$ is a vector containing a class score for each of the yes, no, irrelevant, and inquire decisions. For inquiries, we compute an inquiry score $r_i$ for each extracted rule $R_i$:

$r_i = W_z \bar{A}_i + b_z \in \mathbb{R}$  (16)

where $W_z \in \mathbb{R}^{d_U+2}$ and $b_z \in \mathbb{R}$. Let $k$ indicate the correct decision, and $i$ indicate the correct inquiry if the model is supposed to make an inquiry. The decision loss is

$\mathcal{L}_{\mathrm{dec}} = -\log \mathrm{softmax}(z)_k - \mathbb{1}_{k=\mathrm{inquire}} \log \mathrm{softmax}(r)_i$  (17)

During inference, the model first determines the decision $d = \mathrm{argmax}_k\, z_k$. If the decision $d$ is inquire, the model asks a follow-up question about the $i$th rule such that $i = \mathrm{argmax}_j\, r_j$. Otherwise, the model concludes the dialogue with $d$.

Rephrasing a rule into a question via the editor. In the event that the model chooses to make an inquiry about an extracted rule $R_i$, $R_i$ is given to a subsequent editor to rephrase into a follow-up question.
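Because the entailment scores in Eqs. (7)-(10) are plain token-overlap statistics, they can be computed without any learned parameters. The sketch below illustrates them; the simple lowercased word tokenizer and the example scenario sentence are assumptions made for this illustration, not inputs taken from Figure 1.

```python
import re
from collections import Counter

def tokenize(text):
    # simple illustrative tokenizer (lowercased word characters); an assumption of this sketch
    return re.findall(r"\w+", text.lower())

def f1_overlap(rule_tokens, other_tokens):
    """Token-overlap F1 between a rule span and another text (Eqs. 7-9)."""
    shared = sum((Counter(rule_tokens) & Counter(other_tokens)).values())  # N(R_i, S)
    if shared == 0:
        return 0.0
    pr = shared / len(rule_tokens)    # Eq. (7): precision
    rec = shared / len(other_tokens)  # Eq. (8): recall
    return 2 * pr * rec / (pr + rec)  # Eq. (9): F1

def entailment_scores(rule, scenario, history_inquiries):
    """Scenario score g_i (Eq. 9) and history score h_i (Eq. 10) for one extracted rule."""
    r = tokenize(rule)
    g = f1_overlap(r, tokenize(scenario))
    h = max((f1_overlap(r, tokenize(q)) for q in history_inquiries), default=0.0)
    return g, h

# "UK resident" was already asked about in the history, so h_i is high, while the
# (made-up) scenario sentence shares no tokens with the rule, i.e. g_i = 0.
g, h = entailment_scores("UK resident",
                         "I retired last year.",
                         ["Are you a UK resident?"])
```

These scores are then concatenated onto the span representation $A_i$ as in Eq. (11).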
For the example in Figure 1, the editor edits the span "UK civil service pensions" into the follow-up question "Are you receiving UK civil service pensions?" Figure 3 illustrates the editor. The editor takes as input $x_{\mathrm{edit}} = [R_i; x_D]$, the concatenation of the extracted rule to rephrase, $R_i$, and the rule text $x_D$. As before, we encode the input using a BERT encoder to obtain $U_{\mathrm{edit}} = \mathrm{BERT}(x_{\mathrm{edit}})$. The encoder is followed by two decoders that respectively generate the pre-span edit $R_{i,\mathrm{pre}}$ and the post-span edit $R_{i,\mathrm{post}}$. For the example in Figure 1, given the span "UK civil service pensions", the pre-span and post-span edits that form the question "Are you receiving UK civil service pensions?" are respectively "Are you receiving" and "?".

To perform each edit, we employ an attentive decoder (Bahdanau et al., 2015) with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). Let $h_t$ denote the decoder state at time $t$. We compute attention $a_t$ over the input:

$\zeta_k = U_{\mathrm{edit},k}\, h_{t-1} \in \mathbb{R}$  (18)
$\bar{\zeta}_k = \mathrm{softmax}(\zeta)_k \in \mathbb{R}$  (19)
$a_t = \sum_{k} \bar{\zeta}_k U_{\mathrm{edit},k} \in \mathbb{R}^{d_U}$  (20)

Let $V \in \mathbb{R}^{n_V \times d_V}$ denote the embedding matrix corresponding to the $n_V$ tokens in the vocabulary.
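The attention step in Eqs. (18)-(20) reduces to a dot-product attention over the encoded editor input; the following minimal sketch shows one such step. Feeding the resulting context vector a_t, together with the previously generated token, into an LSTM cell and an output projection over V is assumed here rather than spelled out above.

```python
import torch

def editor_attention(U_edit, h_prev):
    """One attention step of the editor decoder (Eqs. 18-20).

    U_edit: (n_x, d_U) BERT-encoded editor input; h_prev: (d_U,) previous decoder state.
    Returns the context vector a_t of shape (d_U,).
    """
    zeta = U_edit @ h_prev                  # Eq. (18): one score per input token
    zeta_bar = torch.softmax(zeta, dim=0)   # Eq. (19): normalized attention weights
    a_t = zeta_bar @ U_edit                 # Eq. (20): weighted sum of encoder states
    return a_t
```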
Figure 3: The editor of E3. [The figure shows the proposed rule R_i and the rule text x_D concatenated into x_edit, encoded by a BERT Transformer encoder into U_edit, followed by a pre-span attentive decoder and a post-span attentive decoder that produce the pre-span edit R_i,pre and the post-span edit R_i,post.]

Let V \in \mathbb{R}^{n_V \times d_V} denote the embedding matrix corresponding to the n_V tokens in the vocabulary. To generate the t-th token w_t, we use weight tying between the output layer and the embedding matrix (Press and Wolf, 2017):

v_t = \text{embed}(V, w_{t-1})    (21)
h_t = \text{LSTM}([v_t; a_t], h_{t-1}) \in \mathbb{R}^{d_U}    (22)
o_t = W_o [h_t; a_t] + b_o \in \mathbb{R}^{d_V}    (23)
p(w_t) = \text{softmax}(V o_t) \in \mathbb{R}^{n_V}    (24)
w_t = \text{argmax}_k\, p(w_t)_k    (25)

We use a separate attentive decoder to generate the pre-span edit R_i,pre and the post-span edit R_i,post. The decoders share the embedding matrix and the BERT encoder but do not share other parameters. The output of the editor is the concatenation of tokens [R_i,pre; R_i; R_i,post].

The editing loss consists of the sequential cross-entropy losses from generating the pre-span edit and the post-span edit. Let n_pre denote the number of tokens and \hat{w}_{t,pre} the t-th token in the ground-truth pre-span edit. The pre-span loss is

\mathcal{L}_{\text{pre}} = -\sum_{t=1}^{n_{\text{pre}}} \log p(\hat{w}_{t,\text{pre}})    (26)

The editing loss is then the sum of the pre-span and post-span losses, the latter of which is obtained in a manner similar to Eq. (26):

\mathcal{L}_{\text{edit}} = \mathcal{L}_{\text{pre}} + \mathcal{L}_{\text{post}}    (27)

4 Experiment

We train and evaluate the Entailment-driven Extract and Edit network on the ShARC CMR dataset. In particular, we compare our method to three other models. Two of these models are proposed by Saeidi et al. (2018): an attentive sequence-to-sequence model that attends to the concatenated input and generates the response token by token (Seq2Seq), and a strong hand-engineered pipeline model with sub-models for entailment, classification, and generation (Pipeline). For the latter, Saeidi et al. (2018) show that these sub-models outperform neural models such as the entailment model by Parikh et al. (2016), and that the combined pipeline outperforms the attentive sequence-to-sequence model. In addition, we propose an extractive QA baseline based on BERT (BERTQA). Similar models have achieved state-of-the-art results on a variety of QA tasks (Rajpurkar et al., 2016; Reddy et al., 2019). We refer readers to Section A.1 of the appendices for implementation details of BERTQA.

Model | Micro Acc. | Macro Acc. | BLEU1 | BLEU4 | Comb.
Seq2Seq | 44.8 | 42.8 | 34.0 | 7.8 | 3.3
Pipeline | 61.9 | 68.9 | 54.4 | 34.4 | 23.7
BERTQA | 63.6 | 70.8 | 46.2 | 36.3 | 25.7
E3 (ours) | 67.6 | 73.3 | 54.1 | 38.7 | 28.4

Table 1: Model performance on the blind, held-out test set of ShARC. The evaluation metrics are micro- and macro-averaged accuracy in classifying between the decisions yes, no, irrelevant, and inquire. In the event of an inquiry, the generated follow-up question is further evaluated using the BLEU score. In addition to the official evaluation metrics, we also show a combined metric (“Comb.”), which is the product of the macro-averaged accuracy and the BLEU4 score.
4.1 Experimental setup

We tokenize using revtok¹ and part-of-speech tag (for the editor) using Stanford CoreNLP (Manning et al., 2014). We fine-tune the smaller, uncased pretrained BERT model by Devlin et al. (2019) (e.g., bert-base-uncased).² We optimize using ADAM (Kingma and Ba, 2015) with an initial learning rate of 5e-5 and a warm-up rate of 0.1. We regularize using Dropout (Srivastava et al., 2014) after the BERT encoder with a rate of 0.4.

¹ https://github.com/jekbradbury/revtok
² We use the BERT implementation from https://github.com/huggingface/pytorch-pretrained-BERT

To supervise rule extraction, we reconstruct full dialogue trees from the ShARC training set and extract all follow-up questions as well as bullet points from each rule text and its corresponding dialogue tree. We then match these extracted clauses to spans in the rule text, and consider these noisy matched spans as supervision for rule extraction. During inference, we use heuristic bullet-point extraction³ in conjunction with spans extracted by the rule extraction module. This results in minor performance improvements (about 1% micro/macro accuracy) over relying only on the rule extraction module. In cases where one rule fully covers another, we discard the covered, shorter rule. Section A.2 details how clause matching is used to obtain noisy supervision for rule extraction.

³ We extract spans from the text that start with the “*” character and end with another “*” character or a new line.

We train the editor separately, as jointly training with a shared encoder worsens performance. The editor is trained by optimizing L_edit, while the rest of the model is trained by optimizing L_dec + λ L_re. We use a rule extraction threshold of τ = 0.5 and a rule extraction loss weight of λ = 400. We perform early stopping using the product of the macro-averaged accuracy and the BLEU4 score. For the editor, we use fixed, pretrained embeddings from GloVe (Pennington et al., 2014), and use dropout after input attention with a rate of 0.4. Before editing retrieved rules, we remove prefix and suffix adpositions, auxiliary verbs, conjunctions, determiners, and punctuation. We find that doing so allows the editor to convert some extracted rules (e.g., “or sustain damage”) into sensible questions (e.g., “did you sustain damage?”).

4.2 Results

Our performance on the development and the blind, held-out test set of ShARC is shown in Table 1. Compared to previous results, E3 achieves a new state of the art, obtaining the best performance on micro- and macro-averaged decision classification accuracy and BLEU4 while maintaining similar BLEU1 scores. These results show that E3 both answers the user's original question more accurately and generates more coherent and relevant follow-up questions. In addition, Figure 4 shows that because E3 explicitly extracts implicit rules from the document, the model's predictions are explainable in the sense that the user can verify the correctness of the extracted rules and observe how the scenario and previous interactions ground to the extracted rules.
[Figure 4 shows two example predictions: (a) an Additional State Pension rule where, after the user has confirmed contracting out, the model answers “No” (No: 0.99), matching the ground truth; and (b) a rule on VA-financed care where the model inquires “Are you female Vietnam Veteran with a child who has a birth defect?” (Inquire: 0.92), while the ground-truth inquiry is “Are you a female Vietnam Veteran?”.]

Figure 4: Predictions by E3. Extracted spans are underlined in the text. The three scores are the inquiry score r_i (blue), history entailment score h_i (red), and scenario entailment score g_i (green) of the nearest extracted span.

Model | Micro Acc. | Macro Acc. | BLEU1 | BLEU4 | Comb.
E3 | 68.0 | 73.4 | 66.9 | 53.7 | 39.4
-edit | 68.0 | 73.4 | 53.1 | 46.2 | 31.4
-edit, entail | 68.0 | 73.1 | 50.2 | 40.3 | 29.5
-edit, entail, extract (BERTQA) | 63.4 | 70.6 | 47.4 | 37.4 | 23.7

Table 2: Ablation study of E3 on the development set of ShARC. The ablated variants of E3 include versions without the editor; without the editor and entailment module; and without the editor, entailment module, and extraction module, which reduces to the BERT for question answering model by Devlin et al. (2019).

4.3 Ablation study

Table 2 shows an ablation study of E3 on the development set of ShARC.

Retrieval outperforms word generation. BERTQA (“-edit, entail, extract”), which E3 reduces to after removing the editor, entailment, and extraction modules, presents a strong baseline that exceeds previous results on all metrics except BLEU1. This variant inquires about spans extracted from the text, which, while more relevant as indicated by the higher BLEU4 score, do not have the natural qualities of a question, hence the lower BLEU1. Nonetheless, the large gains of BERTQA over the attentive Seq2Seq model show that retrieval is a more promising technique for asking follow-up questions than word-by-word generation. Similar findings were reported for question answering by Yatskar (2019).

Extraction of document structure facilitates generalization. Adding explicit extraction of rules in the document (“-edit, entail”) forces the model to interpret all rules in the document rather than focusing only on extracting the next inquiry. This results in better performance in both decision classification and inquiry relevance compared to the variant that is not forced to interpret all rules.

Modeling entailment improves rule retrieval. The “-edit” model explicitly models whether an extracted rule is entailed by the user scenario and previous turns. Modeling entailment allows the model to better predict whether a rule is entailed, and thus more often inquire about rules that are not entailed. Figure 4a illustrates one such example in which both extracted rules have high entailment scores, and the model chooses to conclude the dialogue by answering no instead of making further inquiries. Adding entailment especially improves the BLEU4 score, as the inquiries made by the model are more relevant and appropriate.

True label \ Predicted label | yes | no | irrelevant | inquire
yes | 530 | 147 | 0 | 127
no | 117 | 541 | 0 | 108
irrelevant | 0 | 0 | 133 | 5
inquire | 107 | 113 | 2 | 340

Figure 5: Confusion matrix of decision predictions on the development set of ShARC.

Editing retrieved rules results in more fluid questions.
While E3 without the editor is able to retrieve rules that are relevant, these spans are not fluent questions that can be presented to the user. The editor is able to edit the extracted rules into more fluent and coherent questions, which results in further gains, particularly in BLEU1.

4.4 Error analysis

In addition to ablation studies, we analyze the errors E3 makes on the development set of ShARC.

Decision errors. Figure 5 shows the confusion matrix of decisions. We specifically examine examples in which E3 produces an incorrect decision. On the ShARC development set there are 726 such cases, which correspond to a 32.0% error rate. We manually analyze 100 such examples to identify common types of errors. Within these, in 23% of examples the model attempts to answer the user's initial question without resolving a necessary rule despite successfully extracting the rule. In 19% of examples, the model identifies and inquires about all necessary rules but comes to the wrong conclusion. In 18% of examples, the model makes a redundant inquiry about a rule that is entailed. In 17% of examples, the rule text contains ambiguous rules. Figure 4b contains one such example in which the annotator identified the rule “a female Vietnam Veteran”, while the model extracted an alternative, longer rule “a female Vietnam Veteran with a child who has a birth defect”. Finally, in 13% of examples, the model fails to extract some rule from the document. Other, less common forms of errors include failures by the entailment module to perform numerical comparison, complex rule procedures that are difficult to deduce, and implications that require world knowledge. These results suggest that improving the decision process after rule extraction is an important area for future work.

Inquiry quality. On 340 examples (15%) in the ShARC development set, E3 generates an inquiry when it is supposed to. We manually analyze 100 such examples to gauge the quality of the generated inquiries. On 63% of examples, the model generates an inquiry that matches the ground truth. On 14% of examples, the model makes inquiries in a different order than the annotator. On 12% of examples, the inquiry refers to an incorrect subject (e.g., “are you born early” vs. “is your baby born early”). This usually results from editing an entity-less bullet point (“* born early”). On 6% of examples, the inquiry is lexically similar to the ground truth but has incorrect semantics (e.g., “do you need savings” vs. “is this information about your savings”). Again, this tends to result from editing short bullet points (e.g., “* savings”). These results indicate that when the model correctly chooses to inquire, it largely inquires about the correct rule. They also highlight a difficulty in evaluating CMR: there can be several correct orderings of inquiries for a document.

5 Conclusion

We proposed the Entailment-driven Extract and Edit network (E3), a conversational machine reading model that extracts implicit decision rules from text, computes whether each rule is entailed by the conversation history, inquires about rules that are not entailed, and answers the user's question. E3 achieved a new state-of-the-art result on the ShARC CMR dataset, outperforming existing systems as well as a new extractive QA baseline based on BERT. In addition to achieving strong performance, we showed that E3 provides a more explainable alternative to prior work that does not model document structure.
Acknowledgments

This research was supported in part by the ARO (W911NF-16-1-0121) and the NSF (IIS-1252835, IIS-1562364). We thank Terra Blevins, Sewon Min, and our anonymous reviewers for helpful feedback.

References

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In EMNLP.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In SIGDIAL.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP.
Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL.
B. Moulin and D. Rousseau. 1992. Automated knowledge acquisition from regulatory texts. IEEE Expert.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In EMNLP.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Pei-Hao Su, Milica Gasic, Nikola Mrkšić, Lina M. Rojas Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In EACL.
Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In SIGDIAL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Mark Yatskar. 2019. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In NAACL.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE.
Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In ACL.

A Appendices

A.1 BERTQA Baseline

Our BERTQA baseline follows the model proposed by Devlin et al. (2019) for the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). Due to the differences in context between ShARC and SQuAD, we augment the input to the BERTQA model in a manner similar to Section 3.1. The distinction here is that we additionally add the decision types “yes”, “no”, and “irrelevant” as parts of the input, such that the problem is fully solvable via span extraction. Similar to Section 3.1, let U denote the BERT encoding of the length-n input sequence. The BERTQA model predicts a start score s and an end score e:

s = \text{softmax}(U W_s + b_s) \in \mathbb{R}^n    (28)
e = \text{softmax}(U W_e + b_e) \in \mathbb{R}^n    (29)

We take the answer as the span (i, j) that gives the highest score s_i e_j such that j ≥ i. Because we augment the input with decision labels, the model can be fully supervised via extraction endpoints.

A.2 Creating noisy supervision for span extraction via span matching

The ShARC dataset is constructed from full dialogue trees in which annotators exhaustively annotate the yes/no branches of follow-up questions. Consequently, each rule required to answer the initial user question forms a follow-up question in the full dialogue tree. In order to identify rule spans in the document, we first reconstruct the dialogue trees for all training examples in ShARC. For each document, we trim each follow-up question in its corresponding dialogue tree by removing punctuation and stop words. For each trimmed question, we find the shortest best-match span in the document that has the least edit distance from the trimmed question, which we take as the corresponding rule span.
In addition, we extract similarly trimmed bullet points from the document as rule spans. Finally, we deduplicate the rule spans by removing those that are fully covered by a longer rule span. The resulting set of rule spans is used as noisy supervision for the rule extraction module. This preprocessing code is included with our code release.
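As a rough illustration of this matching procedure, the sketch below trims a follow-up question and scans the document for the most similar short span. The stop-word list is a stand-in, and difflib similarity is used in place of a proper edit-distance implementation, so this is only an approximation of the released preprocessing code.

```python
import re
from difflib import SequenceMatcher

STOP_WORDS = {"do", "you", "your", "the", "a", "an", "is", "are", "of", "to", "it", "for"}  # illustrative

def trim(text):
    """Lowercase, strip punctuation, and drop stop words."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def best_match_span(question, document, max_len=12):
    """Return (start, end, span) for the shortest document span most similar to the trimmed question."""
    query = " ".join(trim(question))
    doc_tokens = document.split()
    best, best_score, best_len = None, -1.0, None
    for i in range(len(doc_tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(doc_tokens) + 1)):
            span = " ".join(doc_tokens[i:j])
            score = SequenceMatcher(None, query, " ".join(trim(span))).ratio()
            shorter = best_len is None or (j - i) < best_len
            if score > best_score or (score == best_score and shorter):
                best, best_score, best_len = (i, j, span), score, j - i
    return best

doc = ("You get the Additional State Pension automatically if you're eligible for it, "
       "unless you've contracted out of it.")
print(best_match_span("Are you eligible for it?", doc))
```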
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2321–2334 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Generating Question-Answer Hierarchies

Kalpesh Krishna & Mohit Iyyer
College of Information and Computer Sciences, University of Massachusetts Amherst
{kalpesh,miyyer}@cs.umass.edu
http://squash.cs.umass.edu/

Abstract

The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., “Why did Frodo leave the Fellowship?”) to reveal related but more specific questions (e.g., “Who did Frodo leave with?”). Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.

1 Introduction

Q: What is this paper about?
A: We present a novel text generation task which converts an input document into a model-generated hierarchy of question-answer (QA) pairs arranged in a top-down tree structure (Figure 1). Questions at higher levels of the tree are broad and open-ended while questions at lower levels ask about more specific factoids. An entire document has multiple root nodes (“key ideas”) that unfold into a forest of question trees. While readers are initially shown only the root nodes of the question trees, they can “browse” the document by clicking on root nodes of interest to reveal more fine-grained related information.
[Figure 1 shows an example on a Wikipedia paragraph about Massive Attack (band): GENERAL questions such as “What was the iPhone application Fantom?” and “What is Ritual Spirit?” unfold into SPECIFIC ones such as “Who created it?”, “Who was in the video?”, and “Who was the Australian director?”.]

Figure 1: A subset of the QA hierarchy generated by our SQUASH system that consists of GENERAL and SPECIFIC questions with extractive answers.

We call our task SQUASH (Specificity-controlled Question Answer Hierarchies).

Q: Why represent a document with QA pairs?¹
A: Questions and answers (QA) play a critical role in scientific inquiry, information-seeking dialogue and knowledge acquisition (Hintikka, 1981, 1988; Stede and Schlangen, 2004). For example, web users often use QA pairs to manage and share knowledge (Wagner, 2004; Wagner and Bolloju, 2005; Gruber, 2008). Additionally, unstructured lists of “frequently asked questions” (FAQs) are regularly deployed at scale to present information. Industry studies have demonstrated their effectiveness at cutting costs associated with answering customer calls or hiring technical experts (Davenport et al., 1998). Automating the generation of QA pairs can thus be of immense value to companies and web communities.

¹ Our introduction is itself an example of the QA format. Other academic papers such as Henderson et al. (2018) have also used this format to effectively present information.

Q: Why add hierarchical structure to QA pairs?
A: While unstructured FAQs are useful, pedagogical applications benefit from additional hierarchical organization. Hakkarainen and Sintonen (2002) show that students learn concepts effectively by first asking general, explanation-seeking questions before drilling down into more specific questions. More generally, hierarchies break up content into smaller, more digestible chunks. User studies demonstrate a strong preference for hierarchies in document summarization (Buyukkokten et al., 2001; Christensen et al., 2014) since they help readers easily identify and explore key topics (Zhang et al., 2017).

Q: How do we build systems for SQUASH?
A: We leverage the abundance of reading comprehension QA datasets to train a pipelined system for SQUASH. One major challenge is the lack of labeled hierarchical structure within existing QA datasets; we tackle this issue in Section 2 by using the question taxonomy of Lehnert (1978) to classify questions in these datasets as either GENERAL or SPECIFIC. We then condition a neural question generation system on these two classes, which enables us to generate both types of questions from a paragraph. We filter and structure these outputs using the techniques described in Section 3.

Q: How do we evaluate our SQUASH pipeline?
A: Our crowdsourced evaluation (Section 4) focuses on fundamental properties of the generated output such as QA quality, relevance, and hierarchical correctness. Our work is a first step towards integrating QA generation into document understanding; as such, we do not directly evaluate how useful SQUASH output is for downstream pedagogical applications. Instead, a detailed qualitative analysis (Section 5) identifies challenges that need to be addressed before SQUASH can be deployed to real users.

Q: What are our main contributions?
A1: A method to classify questions according to their specificity based on Lehnert (1978).
A2: A model controlling the specificity of generated questions, unlike prior work on QA generation.
A3: A novel text generation task (SQUASH), which converts documents into specificity-based hierarchies of QA pairs.
A4: A pipelined system to tackle SQUASH along with crowdsourced methods to evaluate it.

Q: How can the community build on this work?
A: We have released our codebase, crowdsourcing templates for evaluation, and a live demonstration of our system at http://squash.cs.umass.edu/. Additionally, we outline guidelines for future work in Section 7.

2 Obtaining training data for SQUASH

The proliferation of reading comprehension datasets like SQuAD (Rajpurkar et al., 2016, 2018) has enabled state-of-the-art neural question generation systems (Du et al., 2017; Kim et al., 2018). However, these systems are trained for individual question generation, while the goal of SQUASH is to produce a general-to-specific hierarchy of QA pairs. Recently released conversational QA datasets like QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2018) contain a sequential arrangement of QA pairs, but question specificity is not explicitly marked.² Motivated by the lack of hierarchical QA datasets, we automatically classify questions in SQuAD, QuAC and CoQA according to their specificity using a combination of rule-based and automatic approaches.

² “Teachers” in the QuAC set-up can encourage “students” to ask a follow-up question, but we cannot use these annotations to infer a hierarchy because students are not required to actually follow their teachers' directions.

2.1 Rules for specificity classification

What makes one question more specific than another? Our scheme for classifying question specificity maps each of the 13 conceptual question categories defined by Lehnert (1978) to three coarser labels: GENERAL, SPECIFIC, or YES-NO.³ As a result of this mapping, SPECIFIC questions usually ask for low-level information (e.g., entities or numerics), while GENERAL questions ask for broader overviews (e.g., “what happened in 1999?”) or causal information (e.g., “why did...”). Many question categories can be reliably identified using simple templates and rules; a complete list is provided in Table 1.⁴

³ We add a third category for YES-NO questions as they are difficult to classify as either GENERAL or SPECIFIC.
⁴ Questions in Lehnert (1978) were classified using a conceptual dependency parser (Schank, 1972). We could not find a modern implementation of this parser and thus decided to use a rule-based approach that relies on spaCy 2.0 (Honnibal and Montani, 2017) for all preprocessing.

Conceptual class | Specificity | Question asks for... | Sample templates
Causal Antecedent, Goal Oriented, Enablement, Causal Consequent, Expectational | GENERAL | the reason for occurrence of an event and the consequences of it | Why ..., What happened after / before ..., What was the cause / reason / purpose ..., What led to ...
Instrumental | GENERAL | a procedure / mechanism | How question with VERB parent for How in dependency tree
Judgemental | GENERAL | a listener's opinion | Words like you, your present
Concept Completion, Feature Specification | GENERAL or SPECIFIC | fill-in-the-blank information | Where / When / Who ... (“SPECIFIC” templates)
Quantification | SPECIFIC | an amount | How many / long ...
Verification, Disjunctive | YES-NO | Yes-No answers | first word is VERB
Request | N/A | an act to be performed | (absent in datasets)

Table 1: The 13 conceptual categories of Lehnert (1978) and some templates to identify them and their specificity.
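The rule layer can be approximated with simple pattern matching. The sketch below hard-codes a handful of the templates from Table 1; the exact regular expressions, the rule ordering, and the verb list are illustrative guesses rather than the released rules (which also rely on spaCy dependency parses).

```python
import re

GENERAL_PATTERNS = [                     # causal templates from Table 1
    r"^why\b", r"^what happened (after|before)\b",
    r"^what (was|is) the (cause|reason|purpose)\b", r"^what led to\b",
]
SPECIFIC_PATTERNS = [                    # concept completion / quantification templates
    r"^(where|when|who)\b", r"^how (many|long)\b",
]
VERBS = {"is", "are", "was", "were", "do", "does", "did", "can", "could", "has", "have"}

def rule_label(question):
    """Return GENERAL / SPECIFIC / YES-NO, or None if no template fires."""
    q = question.lower().strip()
    tokens = q.split()
    if any(re.search(p, q) for p in GENERAL_PATTERNS):
        return "GENERAL"
    if any(re.search(p, q) for p in SPECIFIC_PATTERNS):
        return "SPECIFIC"
    if tokens and tokens[0] in VERBS:    # verification / disjunctive
        return "YES-NO"
    if " you" in f" {q}":                # judgemental: mentions you / your
        return "GENERAL"
    return None                          # fall back to the trained classifier

print(rule_label("Why did Frodo leave the Fellowship?"))   # GENERAL
print(rule_label("How many units does Syco have?"))        # SPECIFIC
print(rule_label("Did he win an award?"))                  # YES-NO
```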
Classifying questions not covered by templates: If a question does not satisfy any template or rule, how do we assign it a label? We manage to classify roughly half of all questions with our templates and rules (Table A1); for the remaining half, we resort to a data-driven approach. First, we manually label 1000 questions in QuAC⁵ using our specificity labels. This annotated data is then fed to a single-layer CNN binary classifier (Kim, 2014) using ELMo contextualized embeddings (Peters et al., 2018).⁶ On an 85%-15% train-validation split, we achieve a high classification accuracy of 91%. The classifier also transfers to other datasets: on 100 manually labeled CoQA questions, we achieve a classification accuracy of 80%. To obtain our final dataset (Table 2), we run our rule-based approach on all questions in SQuAD 2.0, QuAC, and CoQA and apply our classifier to label questions that were not covered by the rules. We further evaluate the specificity of the questions generated by our final system using a crowdsourced study in Section 4.3.

⁵ We use QuAC because its design encourages a higher percentage of GENERAL questions than other datasets, as the question-asker was unable to read the document to formulate more specific questions.
⁶ Implemented in AllenNLP (Gardner et al., 2018).

Dataset | Size | GENERAL | SPECIFIC | YES-NO
SQuAD | 86.8k | 28.2% | 69.7% | 2.1%
QuAC | 65.2k | 34.9% | 33.5% | 31.6%
CoQA | 105.6k | 23.6% | 54.9% | 21.5%
All | 257.6k | 28.0% | 54.5% | 17.5%

Table 2: Distribution of classes in the final datasets. We add some analysis on this distribution in Appendix A.

3 A pipeline for SQUASHing documents

To SQUASH documents, we build a pipelined system (Figure 2) that takes a single paragraph as input and produces a hierarchy of QA pairs as output; for multi-paragraph documents, we SQUASH each paragraph independently of the rest. At a high level, the pipeline consists of five steps: (1) answer span selection, (2) question generation conditioned on answer spans and specificity labels, (3) extractively answering generated questions, (4) filtering out bad QA pairs, and (5) structuring the remaining pairs into a GENERAL-to-SPECIFIC hierarchy. The remainder of this section describes each step in more detail.

Figure 2: An overview of the process by which we generate a pair of GENERAL-SPECIFIC questions, which consists of feeding input data (“RC” is Reading Comprehension) through various modules, including a question classifier and a multi-stage pipeline for question generation, answering, and filtering.

3.1 Answer span selection

Our pipeline begins by selecting an answer span from which to generate a question. To train the system, we can use ground-truth answer spans from our labeled datasets, but at test time how do we select answer spans? Our solution is to consider all individual sentences in the input paragraph as potential answer spans (to generate GENERAL and SPECIFIC questions), along with all entities and numerics (for just SPECIFIC questions). We did not use data-driven sequence tagging approaches like previous work (Du and Cardie, 2017, 2018), since our preliminary experiments with such approaches yielded poor results on QuAC.⁷ More details are provided in Appendix C.

⁷ We hypothesize that answer span identification on QuAC is difficult because the task design encouraged “teachers” to provide more information than just the minimal answer span.
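As a concrete illustration of this step, the sketch below uses spaCy (which the paper already relies on for preprocessing) to enumerate candidate spans; the specific pipeline name en_core_web_sm and the numeric filter are assumptions made for this sketch, not the authors' exact code.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes an installed spaCy English pipeline with NER

def candidate_answer_spans(paragraph):
    """Return (span_text, specificity) candidates for question generation.

    Each sentence seeds both a GENERAL and a SPECIFIC question; entities
    and numerics seed SPECIFIC questions only.
    """
    doc = nlp(paragraph)
    spans = []
    for sent in doc.sents:
        spans.append((sent.text, "GENERAL"))
        spans.append((sent.text, "SPECIFIC"))
    for ent in doc.ents:
        spans.append((ent.text, "SPECIFIC"))
    for tok in doc:
        if tok.like_num and not tok.ent_type_:  # numerics not already covered by an entity
            spans.append((tok.text, "SPECIFIC"))
    return spans

para = ("On 28 January 2016, Massive Attack released a new EP, Ritual Spirit, "
        "which includes the four songs released on Fantom.")
for span, label in candidate_answer_spans(para):
    print(label, "<-", span)
```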
3.2 Conditional question generation

Given a paragraph, answer span, and desired specificity label, we train a neural encoder-decoder model on all three reading comprehension datasets (SQuAD, QuAC and CoQA) to generate an appropriate question.

Data preprocessing: At training time, we use the ground-truth answer spans from these datasets as input to the question generator. To improve the quality of SPECIFIC questions generated from sentence spans, we use the extractive evidence spans for CoQA instances (Reddy et al., 2018) instead of the shorter, partially abstractive answer spans (Yatskar, 2019). In all datasets, we remove unanswerable questions and questions whose answers span multiple paragraphs. A few very generic questions (e.g., “what happened in this article?”) were manually identified and removed from the training dataset. Some other questions (e.g., “where was he born?”) are duplicated many times in the dataset; we downsample such questions to a maximum limit of 10. Finally, we preprocess both paragraphs and questions using byte-pair encoding (Sennrich et al., 2016).

Architecture details: We use a two-layer biLSTM encoder and a single-layer LSTM (Hochreiter and Schmidhuber, 1997) decoder with soft attention (Bahdanau et al., 2015) to generate questions, similar to Du et al. (2017). Our architecture is augmented with a copy mechanism (See et al., 2017) over the encoded paragraph representations. Answer spans are marked with <SOA> and <EOA> tokens in the paragraph, and representations for tokens within the answer span are attended to by a separate attention head. We condition the decoder on the specificity class (GENERAL, SPECIFIC and YES-NO)⁸ by concatenating an embedding for the ground-truth class to the input at each time step. We implement models in PyTorch v0.4 (Paszke et al., 2017), and the best-performing model achieves a perplexity of 11.1 on the validation set. Other hyperparameter details are provided in Appendix B.

⁸ While we do not use YES-NO questions at test time, we keep this class to avoid losing a significant proportion of training data.

Test time usage: At test time, the question generation module is supplied with answer spans and class labels as described in Section 3.1. To promote diversity, we over-generate prospective candidates (Heilman and Smith, 2010) for every answer span and later prune them. Specifically, we use beam search with a beam size of 3 to generate three highly probable question candidates. As these candidates are often generic, we additionally use top-k random sampling (Fan et al., 2018) with k = 10, a recently proposed diversity-promoting decoding algorithm, to generate ten more question candidates per answer span. Hence, for every answer span we generate 13 question candidates. We discuss issues with using just standard beam search for question generation in Section 5.1.

3.3 Answering generated questions

While we condition our question generation model on pre-selected answer spans, the generated questions may not always correspond to these input spans. Sometimes, the generated questions are either unanswerable or answered by a different span in the paragraph. By running a pretrained QA model over the generated questions, we can detect questions whose answers do not match their original input spans and filter them out.
The predicted answer for many questions has partial overlap with the original answer span; in these cases, we display the predicted answer span during evaluation, as a qualitative inspection shows that the predicted answer is more often closer to the correct answer. For all of our experiments, we use the AllenNLP implementation of the BiDAF++ question answering model of Choi et al. (2018) trained on QuAC with no dialog context.

3.4 Question filtering

After over-generating candidate questions from a single answer span, we use simple heuristics to filter out low-quality QA pairs. We remove generic and duplicate question candidates⁹ and pass the remaining QA pairs through the multi-stage question filtering process described below.

⁹ Running top-k random sampling multiple times can produce duplicate candidates, including those already in the top beams.

Irrelevant or repeated entities: Top-k random sampling often generates irrelevant questions; we reduce their incidence by removing any candidates that contain nouns or entities unspecified in the passage. As with other neural text generation systems (Holtzman et al., 2018), we commonly observe repetition in the generated questions and deal with this phenomenon by removing candidates with repeated nouns or entities.

Unanswerable or low answer overlap: We remove all candidates marked as “unanswerable” by the question answering model, which prunes 39.3% of non-duplicate question candidates. These candidates are generally grammatically correct but considered irrelevant to the original paragraph by the question answering model. Next, we compute the overlap between the original and predicted answer spans by computing word-level precision and recall (Rajpurkar et al., 2016). For GENERAL questions generated from sentence spans, we attempt to maximize recall by setting a minimum recall threshold of 0.3.¹⁰ Similarly, we maximize recall for SPECIFIC questions generated from named entities with a minimum recall constraint of 0.8. Finally, for SPECIFIC questions generated from sentence spans, we set a minimum precision threshold of 1.0, which filters out questions whose answers are not completely present in the ground-truth sentence.

¹⁰ Minimum thresholds were qualitatively chosen based on the specificity type.

Low generation probability: If multiple candidates remain after applying the above filtering criteria, we select the most probable candidate for each answer span. SPECIFIC questions generated from sentences are an exception to this rule: for these questions, we select the ten most probable candidates, as there might be multiple question-worthy bits of information in a single sentence. If no candidates remain, in some cases¹¹ we use a fallback mechanism that sequentially ignores filters to retain more candidates.

¹¹ For example, if no valid GENERAL questions for the entire paragraph are generated.
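The overlap criteria above can be summarized with SQuAD-style word-level precision and recall. The sketch below applies the thresholds quoted in the text (0.3, 0.8, and 1.0); the function boundaries and the candidate "kind" labels are illustrative, not the released filtering code.

```python
from collections import Counter

def word_precision_recall(predicted, original):
    """Word-level precision/recall between the predicted and original answer spans."""
    pred, orig = predicted.lower().split(), original.lower().split()
    common = sum((Counter(pred) & Counter(orig)).values())
    if not pred or not orig or common == 0:
        return 0.0, 0.0
    return common / len(pred), common / len(orig)

def keep_candidate(kind, predicted, original, unanswerable):
    """Apply the specificity-dependent overlap thresholds described above."""
    if unanswerable:                      # pruned by the QA model
        return False
    precision, recall = word_precision_recall(predicted, original)
    if kind == "general_sentence":        # GENERAL question from a sentence span
        return recall >= 0.3
    if kind == "specific_entity":         # SPECIFIC question from an entity span
        return recall >= 0.8
    if kind == "specific_sentence":       # SPECIFIC question from a sentence span
        return precision >= 1.0
    return False

print(keep_candidate(
    "general_sentence",
    "released a new EP, Ritual Spirit",
    "On 28 January 2016, Massive Attack released a new EP, Ritual Spirit",
    unanswerable=False))                  # True: recall is 0.5, above the 0.3 threshold
```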
[Figure 3 shows a paragraph about Yoda battling Palpatine, retreating, and going into exile on Dagobah. GENERAL questions (GQ) such as “What happened in the battle with Palpatine?” and “What is revealed at the end of the film?” group SPECIFIC questions (SQ) such as “Where was the battle?”, “Where did he go on exile?”, and “Who does he want to destroy?”.]

Figure 3: Procedure used to form a QA hierarchy. The predicted answers for GQs (GENERAL questions) are underlined in blue. The predicted answers for SQs (SPECIFIC questions) are highlighted in red.

3.5 Forming a QA hierarchy

The output of the filtering module is an unstructured list of GENERAL and SPECIFIC QA pairs generated from a single paragraph. Figure 3 shows how we group these questions into a meaningful hierarchy. First, we choose a parent for each SPECIFIC question by maximizing the overlap (word-level precision) of its predicted answer with the predicted answer for every GENERAL question. If a SPECIFIC question's answer does not overlap with any GENERAL question's answer (e.g., “Dagobah” and “destroy the Sith”), we map it to the closest GENERAL question whose answer occurs before the SPECIFIC question's answer (“What happened in the battle ...?”).¹²

¹² This heuristic is justified because users read GENERAL questions before SPECIFIC ones in our interface.
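A compact sketch of this grouping heuristic: each SPECIFIC QA pair is attached to the GENERAL QA pair whose predicted answer it overlaps most under word-level precision, with a fallback to the closest preceding GENERAL answer. The dictionary format and the word_precision helper are illustrative assumptions.

```python
from collections import Counter

def word_precision(child, parent):
    """Fraction of the child answer's words that also appear in the parent answer."""
    c, p = child.lower().split(), parent.lower().split()
    return sum((Counter(c) & Counter(p)).values()) / max(len(c), 1)

def group_questions(general_qas, specific_qas):
    """Each QA dict has 'question', 'answer', and 'start' (character offset of the
    predicted answer in the paragraph). Returns {general question: [specific questions]}."""
    tree = {g["question"]: [] for g in general_qas}
    for s in specific_qas:
        scored = [(word_precision(s["answer"], g["answer"]), g) for g in general_qas]
        best_score, best_g = max(scored, key=lambda x: x[0])
        if best_score == 0.0:
            # No overlap: fall back to the closest GENERAL answer occurring earlier.
            preceding = [g for g in general_qas if g["start"] <= s["start"]]
            best_g = max(preceding, key=lambda g: g["start"]) if preceding else general_qas[0]
        tree[best_g["question"]].append(s["question"])
    return tree

generals = [{"question": "What happened in the battle with Palpatine?",
             "answer": "Yoda battles Palpatine in a lightsaber duel", "start": 14}]
specifics = [{"question": "Where did he go on exile?", "answer": "Dagobah", "start": 160}]
print(group_questions(generals, specifics))
```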
While the percentage of unanswerable questions that were generated offers some insight into this question, we removed all of them during the filtering pipeline (Section 3.4). Hence, we display an input paragraph and generated question to crowd workers (using the same data as the previous wellformedness evaluation) and ask whether or not the paragraph contains the answer to the question. The second row of Table 3 shows that 78.7% of our questions are relevant to the paragraph, compared to 83.3% of gold questions. 4.2 Individual answer validity Is the predicted answer actually a valid answer to the generated question? In our filtering pro14As “meaningful” is potentially a confusing term for crowd workers, we ran another experiment asking only for grammatical correctness and achieved very similar results. 15Results on this experiment were computed after removing 3 duplicate generated questions and 10 duplicate gold questions. cess, we automatically measured answer overlap between the input answer span and the predicted answer span and used the results to remove lowoverlap QA pairs. To evaluate answer recall after filtering, we perform a crowdsourced evaluation on the same 300 QA pairs as above by asking crowdworkers whether or not a predicted answer span contains the answer to the question. We also experiment with a more relaxed variant (partially contains instead of completely contains) and report results for both task designs in the third and fourth rows of Table 3. Over 85% of predicted spans partially contain the answer to the generated question, and this number increases if we consider only questions that were previously labeled as well-formed and relevant. The lower gold performance is due to the contextual nature of the gold QA pairs in QuAC, which causes some questions to be meaningless in isolation (e.g.“What did she do next?” has unresolvable coreferences). Experiment Score Fleiss κ Which question type asks for more information? 89.5% 0.57 Which SPECIFIC question is closer to GENERAL QA? different paragraph 77.0% 0.47 same paragraph 64.0% 0.30 Table 4: Human evaluation of the structural correctness of our system. The labels “different / same paragraph” refer to the location of the intruder question. The results show the accuracy of specificity and hierarchies. 4.3 Structural correctness To examine the hierachical structure of SQUASH ed documents, we conduct three experiments. 2327 Cowell formed a new company Syco, which is divided into three units - Syco Music, Syco TV and Syco Film. Cowell returned to music with his latest brainchild signed to Syco ... What is Syco? How many units does Syco have? Returning home to Brantford after six months abroad, Bell continued experiments with his "harmonic telegraph". The basic concept behind his device was that messages could ... What was Bell's telegraph? Where did he take his experiments?    After five years, however, Limon would return to Broadway to star as a featured dancer in Keep Off the Grass under the choreographer George Balanchine. Why did he return to Broadway? Who did he work with? Tan Dun earned widespread attention after composing the score for Ang Lee's Crouching Tiger, Hidden Dragon (2000), for which he won an Academy Award, a Grammy Award .... How was Tan Dun received? What award did he win? From 1969 to 1971, Cash starred in his own television show, The Johnny Cash Show, on the ABC network. The show was performed at the Ryman Auditorium in Nashville. ... What did he do in 1969? What network was he in? 
Figure 4: SQUASH question hierarchies generated by our system with reference snippets . Questions in the hierarchy are of the correct specificity class (i.e., GENERAL , SPECIFIC ). How faithful are output questions to input specificity? First, we investigate whether our model is actually generating questions with the correct specificity label. We run our specificity classifier (Section 2) over 400 randomly sampled questions (50% GENERAL, 50% SPECIFIC) and obtain a high classification accuracy of 91%.16 This automatic evaluation suggests the model is capable of generating different types of questions. Are GENERAL questions more representative of a paragraph than SPECIFIC questions? To see if GENERAL questions really do provide more high-level information, we sample 200 GENERAL-SPECIFIC question pairs17 grouped together as described in Section 3.5. For each pair of questions (without showing answers), we ask crowd workers to choose the question which, if answered, would give them more information about the paragraph. As shown in Table 4, in 89.5% instances the GENERAL question is preferred over the SPECIFIC one, which confirms the strength of our specificity-controlled question generation system.18 How related are SPECIFIC questions to their parent GENERAL question? Finally, we investigate the effectiveness of our question grouping strategy, which bins multiple SPECIFIC QA pairs under a single GENERAL QA pair. We show crowd workers a reference GENERAL QA pair and ask them to choose the most related SPECIFIC question given two choices, one of which is the system’s output and the other an intruder question. 16Accuracy computed after removing 19 duplicates. 17We avoid gold-standard control experiments for structural correctness tests since questions in the QuAC dataset were not generated with a hierarchical structure in mind. Pilot studies using our question grouping module on gold data led to sparse hierarchical structures which were not favored by our crowd workers. 18We also ran a pilot study asking workers “Which question has a longer answer?” and observed a higher preference of 98.6% for GENERAL questions. Weston was born Paul Wetstein in Springfield, Massachusetts, to Paul Wetstein, a teacher, and Anna "Annie" Grady. The family moved to Pittsfield when Weston was two, and he spent his formative years in the town. His parents were both interested in music, and when Paul Sr taught at a private girls' school, he was allowed to bring the school's gramophone ... Q. What are his parents like? A. Paul Wetstein, a teacher, and Anna "Annie" Grady. Q. Who was born in Springfield? A. Weston was born Paul Wetstein in Springfield, Massachusetts, to Paul Wetstein, a teacher, and Anna "Annie" Grady. Q. Where was Weston born? A. Springfield, Massachusetts, Q. Who were his parents? A. Paul Wetstein, a teacher, and Anna "Annie" Grady. Q. Where did he move to? A. The family moved to Pittsfield Q. How old was Weston when he was born? A. two Q. How did he get into music? A. His parents were both interested in music, and when Paul Sr taught at a private girls' school, Q. Where did he go to school? A. Paul Sr taught at a private girls' school, Paul Weston ...The treaty granted the United States control of Puerto Rico, Guam, Cuba, the Philippines, and parts of the West Indies. Many of Bryan's supporters were opposed to what they perceived as Republican aspirations of turning the country into an imperial power ... 
However, when the Bacon Resolution (a proposed supplement to the Treaty of Paris which would allow the Filipinos a "stable and independent government") failed to pass, Bryan began publicly speaking out against the Republicans' imperial aspirations. William Bryan Q. What was the treaty? A. The treaty granted the United States control of Puerto Rico, Guam, Cuba, the Philippines, and parts of the West Indies. Q. Where did the Treaty of Paris come from? A. The treaty granted the United States control of Puerto Rico, Guam, Cuba, the Philippines, and parts of the West Indies. Q. Why was this bad? A. Many of Bryan's supporters were opposed to what they perceived as Republican aspirations of turning the country into an imperial power Q. What was a result of the resolution? A. failed to pass, Bryan began publicly speaking out against the Republicans' imperial aspirations. Figure 5: Two SQUASH outputs generated by our system. The William Bryan example has interesting GENERAL questions. The Paul Weston example showcases several mistakes our model makes. We randomly select intruder SPECIFIC questions from either a different paragraph within the same document or a different group within the same paragraph. As shown in Table 4, crowd workers prefer the system’s generated SPECIFIC question with higher than random chance (50%) regardless of where the intruder comes from. As expected, the preference and agreement is higher when intruder questions come from different paragraphs, since groups within the same paragraph often contain related information (Section 5.2). 5 Qualitative Analysis In this section we analyze outputs (Figure 4, Figure 5) of our pipeline and identify its strengths and weaknesses. We additionally provide more examples in the appendix (Figure A1). 2328 “In 1942, Dodds enlisted in the US army and served as an anti aircraft gunner during World War II.” B In what year did the US army take place? In what year did the US army take over? In what year did the US army take place in the US? T What year was he enlisted? When did he go to war? When did he play as anti aircraft? Table 5: Beam Search (B) vs Top-k sampling (T) for SPECIFIC question generation. Top-k candidates tend to be more diverse. 5.1 What is our pipeline good at? Meaningful hierarchies: Our method of grouping the generated questions (Section 3.5) produces hierarchies that clearly distinguish between GENERAL and SPECIFIC questions; Figure 4 contains some hierarchies that support the positive results of our crowdsourced evaluation. Top-k sampling: Similar to prior work (Fan et al., 2018; Holtzman et al., 2019), we notice that beam search often produces generic or repetitive beams (Table 5). Even though the top-k scheme always produces lower-probable questions than beam search, our filtering system prefers a top-k question 49.5% of the time. 5.2 What kind of mistakes does it make? We describe the various types of errors our model makes in this section, using the Paul Weston SQUASH output in Figure 5 as a running example. Additionally, we list some modeling approaches we tried that did not work in Appendix C. Reliance on a flawed answering system: Our pipeline’s output is tied to the quality of the pretrained answering module, which both filters out questions and produces final answers. QuAC has long answer spans (Choi et al., 2018) that cause low-precision predictions with extra information (e.g., “Who was born in Springfield?”). 
Additionally, the answering module occasionally swaps two named entities present in the paragraph.19 Redundant information and lack of discourse: In our system, each QA pair is generated independently of all the others. Hence, our outputs lack an inter-question discourse structure. Our system often produces a pair of redundant SPECIFIC questions where the text of one question answers the 19For instance in the sentence “The Carpenter siblings were born in New Haven, to Harold B. and Agnes R.” the model incorrectly answers the question “Who was born in New Haven?” as “Harold B. and Agnes R.” other (e.g., “Who was born in Springfield?” vs. “Where was Weston born?”). These errors can likely be corrected by conditioning the generation module on previously-produced questions (or additional filtering); we leave this to future work. Lack of world knowledge: Our models lack commonsense knowledge (“How old was Weston when he was born?”) and can misinterpret polysemous words. Integrating pretrained contextualized embeddings (Peters et al., 2018) into our pipeline is one potential solution. Multiple GENERAL QA per paragraph: Our system often produces more than one tree per paragraph, which is undesirable for short, focused paragraphs with a single topic sentence. To improve the user experience, it might be ideal to restrict the number of GENERAL questions we show per paragraph. While we found it difficult to generate GENERAL questions representative of entire paragraphs (Appendix C), a potential solution could involve identifying and generating questions from topic sentences. Coreferences in GENERAL questions: Many generated GENERAL questions contain coreferences due to contextual nature of the QuAC and CoQA training data (“How did he get into music?”). Potential solutions could involve either constrained decoding to avoid beams with anaphoric expressions or using the CorefNQG model of Du and Cardie (2018). 5.3 Which models did not work? We present modelling approaches which did not work in Appendix C. This includes, i) end-toend modelling to generate sequences of questions using QuAC, ii) span selection NER system, iii) generation of GENERAL questions representative of entire paragraphs, iv) answering system trained on the combination of QuAC, CoQA and SQuAD. 6 Related Work Our work on SQUASH is related to research in three broad areas: question generation, information retrieval and summarization. Question Generation: Our work builds upon neural question generation systems (Du et al., 2017; Du and Cardie, 2018). Our work conditions generation on specificity, similar to difficultyconditioned question generation (Gao et al., 2018). QA pair generation has previously been used for 2329 dataset creation (Serban et al., 2016; Du and Cardie, 2018). Joint modeling of question generation and answering has improved the performance of individual components (Tang et al., 2017; Wang et al., 2017; Sachan and Xing, 2018) and enabled visual dialog generation (Jain et al., 2018). Information Retrieval: Our hierarchies are related to interactive retrieval setting (Hardtke et al., 2009; Brandt et al., 2011) where similar webpages are grouped together. SQUASH is also related to exploratory (Marchionini, 2006) and faceted search (Yee et al., 2003). Summarization: Our work is related to queryfocused summarization (Dang, 2005; Baumel et al., 2018) which conditions an output summary on an input query. Hierarchies have also been applied to summarization (Christensen et al., 2014; Zhang et al., 2017; Tauchmann et al., 2018). 
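To make the answer-overlap filtering of Section 3.4 and the grouping heuristic of Section 3.5 concrete, the sketch below implements both with SQuAD-style word-level precision and recall. It is a minimal sketch: the QAPair fields, helper names, and greedy tie-breaking are illustrative assumptions; only the thresholds (minimum recall of 0.3 and 0.8, minimum precision of 1.0) and the parent-selection rule are taken from Sections 3.4–3.5.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

def token_overlap(pred: str, gold: str):
    """SQuAD-style word-level precision/recall between two answer strings."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    n_common = sum(common.values())
    if n_common == 0:
        return 0.0, 0.0
    return n_common / len(pred_toks), n_common / len(gold_toks)

@dataclass
class QAPair:
    question: str
    input_answer: str          # span the question was generated from
    predicted_answer: str      # span returned by the QA model
    answer_position: int       # offset of the predicted answer in the paragraph
    specificity: str           # "GENERAL" or "SPECIFIC"
    source: str                # "sentence" or "entity"
    children: List["QAPair"] = field(default_factory=list)

def passes_overlap_filter(qa: QAPair) -> bool:
    """Thresholds from Section 3.4."""
    precision, recall = token_overlap(qa.predicted_answer, qa.input_answer)
    if qa.specificity == "GENERAL":        # generated from a sentence span
        return recall >= 0.3
    if qa.source == "entity":              # SPECIFIC, generated from a named entity
        return recall >= 0.8
    return precision >= 1.0                # SPECIFIC, generated from a sentence span

def attach_to_parents(general: List[QAPair], specific: List[QAPair]) -> None:
    """Section 3.5: pick the GENERAL parent whose answer best covers the SPECIFIC answer."""
    for sq in specific:
        best: Optional[QAPair] = None
        best_precision = 0.0
        for gq in general:
            precision, _ = token_overlap(sq.predicted_answer, gq.predicted_answer)
            if precision > best_precision:
                best, best_precision = gq, precision
        if best is None:
            # Fallback: closest GENERAL question whose answer occurs before this answer.
            earlier = [gq for gq in general if gq.answer_position <= sq.answer_position]
            best = max(earlier, key=lambda gq: gq.answer_position, default=None)
        if best is not None:
            best.children.append(sq)
```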
7 Future Work While Section 5.2 focused on shortcomings in our modeling process and steps to fix them, this section focuses on broader guidelines for future work involving the SQUASH format and its associated text generation task. Evaluation of the SQUASH format: As discussed in Section 1, previous research shows support for the usefulness of hierarchies and QA in pedagogical applications. We did not directly evaluate this claim in the context of SQUASH, focusing instead on evaluating the quality of QA pairs and their hierarchies. Moving forward, careful user studies are needed to evaluate the efficacy of the SQUASH format in pedagogical applications, which might be heavily domain-dependent; for example, a QA hierarchy for a research paper is likely to be more useful to an end user than a QA hierarchy for an online blog. An important caveat is the imperfection of modern text generation systems, which might cause users to prefer the original human-written document over a generated SQUASH output. One possible solution is a three-way comparison between the original document, a human-written SQUASHed document, and a system-generated output. For fair comparison, care should be taken to prevent experimenter bias while crowdsourcing QA hierarchies (e.g., by maintaining similar text complexity in the two human-written formats). Collection of a SQUASH dataset: Besides measuring the usefulness of the QA hierarchies, a large dedicated dataset can help to facilitate endto-end modeling. While asking human annotators to write full SQUASHed documents will be expensive, a more practical option is to ask them to pair GENERAL and SPECIFIC questions in our dataset to form meaningful hierarchies and write extra questions whenever no such pair exists. QA budget and deeper specificity hierarchies: In our work, we generate questions for every sentence and filter bad questions with fixed thresholds. An alternative formulation is an adaptive model dependent on a user-specified QA budget, akin to “target length” in summarization systems, which would allow end users to balance coverage and brevity themselves. A related modification is increasing the depth of the hierarchies. While two-level QA trees are likely sufficient for documents structured into short and focused paragraphs, deeper hierarchies can be useful for long unstructured chunks of text. Users can control this property via a “maximum children per QA node” hyperparameter, which along with the QA budget will determine the final depth of the hierarchy. 8 Conclusion We propose SQUASH, a novel text generation task which converts a document into a hierarchy of QA pairs. We present and evaluate a system which leverages existing reading comprehension datasets to attempt solving this task. We believe SQUASH is a challenging text generation task and we hope the community finds it useful to benchmark systems built for document understanding, question generation and question answering. Additionally, we hope that our specificity-labeled reading comprehension dataset is useful in other applications such as 1) finer control over question generation systems used in education applications, curiositydriven chatbots and healthcare (Du et al., 2017). Acknowledgements We thank the anonymous reviewers for their insightful comments. In addition, we thank Nader Akoury, Ari Kobren, Tu Vu and the other members of the UMass NLP group for helpful comments on earlier drafts of the paper and suggestions on the paper’s presentation. 
This work was supported in part by research awards from the Allen Institute for Artificial Intelligence and Adobe Research. 2330 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. International Conference on Learning Representations (ICLR). Tal Baumel, Matan Eyal, and Michael Elhadad. 2018. Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704. Christina Brandt, Thorsten Joachims, Yisong Yue, and Jacob Bank. 2011. Dynamic ranked retrieval. In Proceedings of the fourth ACM international conference on Web search and data mining. Orkut Buyukkokten, Hector Garcia-Molina, and Andreas Paepcke. 2001. Seeing the whole in parts: text summarization for web browsing on handheld devices. In Proceedings of the 10th international conference on World Wide Web. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP). Janara Christensen, Stephen Soderland, Gagan Bansal, et al. 2014. Hierarchical summarization: Scaling up multi-document summarization. In Proc. Association for Computational Linguistics (ACL). Hoa Trang Dang. 2005. Overview of duc 2005. In Document Understanding Conferences. Thomas H Davenport, David W De Long, and Michael C Beers. 1998. Successful knowledge management projects. Sloan management review, 39(2):43–57. Xinya Du and Claire Cardie. 2017. Identifying where to focus in reading comprehension for neural question generation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP). Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. In Proc. Association for Computational Linguistics (ACL). Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proc. Association for Computational Linguistics (ACL). Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proc. Association for Computational Linguistics (ACL). Yifan Gao, Jianan Wang, Lidong Bing, Irwin King, and Michael R Lyu. 2018. Difficulty controllable question generation for reading comprehension. arXiv preprint arXiv:1807.03586. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640. Tom Gruber. 2008. Collective knowledge systems: Where the social web meets the semantic web. Web semantics: science, services and agents on the World Wide Web, 6(1). Kai Hakkarainen and Matti Sintonen. 2002. The interrogative model of inquiry and computer-supported collaborative learning. Science & Education, 11(1):25–43. David Hardtke, Mike Wertheim, and Mark Cramer. 2009. Demonstration of improved search result relevancy using real-time implicit relevance feedback. Understanding the User-Logging and Interpreting User Interactions in Information Search and Retrieval (UIIR-2009). Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question generation. In Proc. 
Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT). Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. 2018. Deep reinforcement learning that matters. In Proc. Association for the Advancement of Artificial Intelligence (AAAI). Jaakko Hintikka. 1981. The logic of informationseeking dialogues: A model. Werner Becker and Wilhelm K. Essler Konzepte der Dialektik, pages 212–231. Jaakko Hintikka. 1988. What is the logic of experimental inquiry? Synthese, 74(2):173–190. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proc. Association for Computational Linguistics (ACL). Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Unnat Jain, Svetlana Lazebnik, and Alexander G Schwing. 2018. Two can play this game: visual dialog with discriminative question generation and answering. In Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR). 2331 Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2018. Improving neural question generation using answer separation. arXiv preprint arXiv:1809.02393. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP). Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proc. International Conference on Learning Representations (ICLR). J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. biometrics, pages 159–174. Wendy G Lehnert. 1978. The process of question answering: A computer simulation of cognition, volume 978. Lawrence Erlbaum Hillsdale, NJ. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP). Gary Marchionini. 2006. Exploratory search: from finding to understanding. Communications of the ACM, 49(4):41–46. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS 2017 Autodiff Workshop. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In Proc. Association for Computational Linguistics (ACL). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing, pages 2383– 2392. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. 
Mrinmaya Sachan and Eric Xing. 2018. Self-training for jointly learning to ask and answer questions. In Proc. Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT). Roger C Schank. 1972. Conceptual dependency: A theory of natural language understanding. Cognitive psychology, 3(4):552–631. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proc. Association for Computational Linguistics (ACL). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. Association for Computational Linguistics (ACL). Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Proc. Association for Computational Linguistics (ACL). Manfred Stede and David Schlangen. 2004. Information-seeking chat: Dialogues driven by topic-structure. In Proceedings of Catalog (the 8th workshop on the semantics and pragmatics of dialogue; SemDial04). Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Christopher Tauchmann, Thomas Arnold, Andreas Hanselowski, Christian M Meyer, and Margot Mieskes. 2018. Beyond generic summarization: A multi-faceted hierarchical summarization corpus of large heterogeneous data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC). Christian Wagner. 2004. Wiki: A technology for conversational knowledge management and group collaboration. Communications of the association for information systems, 13(1):19. Christian Wagner and Narasimha Bolloju. 2005. Supporting knowledge management in organizations with conversational technologies: Discussion forums, weblogs, and wikis. Journal of Database Management, 16(2). Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450. Mark Yatskar. 2019. A qualitative comparison of coqa, squad 2.0 and quac. Proc. Human Language Technology/Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL). Ka-Ping Yee, Kirsten Swearingen, Kevin Li, and Marti Hearst. 2003. Faceted metadata for image search and browsing. In Proceedings of the SIGCHI conference on Human factors in computing systems. 2332 Amy X. Zhang, Lea Verou, and David Karger. 2017. Wikum: Bridging discussion forums and wikis using recursive summarization. In Conference on Computer Supported Cooperative Work and Social Computing (CSCW). 2333 Appendix A Question Classification Details Confirming our intuition, Table 2 shows us that QuAC has the highest percentage of GENERAL questions. On the other hand CoQA and SQuAD, which allowed the question-asker to look at the passage, are dominated by SPECIFIC questions. These findings are consistent with a comparison across the three datasets in Yatskar (2019). Interestingly, the average answer length for SPECIFIC questions in QuAC is 12 tokens, compared to 17 tokens for GENERAL questions. We provide the exact distribution of rule-labeled, hand-labeled and classifier-labeled questions in Table A1. 
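The labeling scheme summarized in Table A1 can be read as a cascade: template rules first, a small hand-labeled slice, and the data-driven CNN classifier for the remainder. The sketch below is only illustrative — the regular expressions stand in for the actual rules of Table 1 (which are not reproduced here), and the exact cascade order is an assumption.

```python
import re
from typing import Callable, Optional

# Illustrative placeholders only: the real templates are the rules of Table 1.
_HYPOTHETICAL_RULES = [
    (re.compile(r"^(what happened|what did .* do|describe|tell me about)", re.I), "GENERAL"),
    (re.compile(r"^(when|where|who|how (many|much|old|long))\b", re.I), "SPECIFIC"),
]

def label_specificity(question: str,
                      classifier: Callable[[str], str],
                      hand_label: Optional[str] = None) -> str:
    """Rule -> hand label -> classifier cascade, mirroring the proportions in Table A1."""
    for pattern, label in _HYPOTHETICAL_RULES:
        if pattern.search(question):
            return label
    if hand_label is not None:      # the small hand-annotated slice
        return hand_label
    return classifier(question)     # data-driven CNN classifier (Kim, 2014) for the rest
```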
B Hyperparameters for Question Generation Our question generation system consists of a two layer bidirectional LSTM encoder and a unidirectional LSTM decoder respectively. The LSTM hidden unit size in each direction and token embedding size is each set to 512. The class specificity embeddings size is 16. Embeddings are shared between the paragraph encoder and question decoder. All attention computations use a bilinear product (Luong et al., 2015). A dropout of 0.5 is used between LSTM layers. Models are trained using Adam (Kingma and Ba, 2014) with a learning rate of 10−3, with a gradient clipping of 5.0 and minibatch size 32. Early stopping on validation perplexity is used to choose the best question generation model. C What did not work? End-to-End Sequential Generation. We experimented with an end-to-end neural model which generated a sequence of questions given a sequence of answer spans. As training data, we leveraged the sequence IDs and follow-up information in the QuAC dataset, without specificity labels. We noticed that during decoding the model rarely attended over the history and often produced questions irrelevant to the context. A potential future direction would involve using the specificity labels for an end-to-end model. Span Selection NER system. As discussed in Section 3.1 and Du and Cardie (2017), we could frame answer span selection as a sequence labelling problem. We experimented with the NER system in AllenNLP (with ELMo embeddings) on the QuAC dataset, after the ground truth answer spans marked with BIO tags, after overlapping answers were merged together. We recorded low F1 scores of 33.3 and 15.6 on sentence-level and paragraph-level input respectively. Paragraph-level question generation. Our question generation model rarely generated GENERAL questions representative of the entire paragraph, even when we fed the entire paragraph as the answer span. We noticed that most GENERAL questions in our dataset were answered by one or two sentences in the paragraph. Answering system trained on all datasets. Recently, Yatskar (2019) reported small improvements on the QuAC validation set by pre-training the BiDAF++ model on SQuAD 2.0 or CoQA. We tried combining the training data in all three datasets but achieved a validation F1 score of just 29.3 (compared to 50.2 after using just QuAC training data). Dataset Size Rule Hand CNN SQuAD 86.8k 30.5% 0.0% 69.5% QuAC 65.2k 59.3% 1.5% 39.2% CoQA 105.6k 57.1% 0.1% 42.8% All 257.6k 48.7% 0.4% 50.9% Table A1: Distribution of scheme adopted to classify questions in different datasets. “CNN” refers to the data-driven classifier. Roughly half the questions were classified using the rules described in Table 1. 2334 Before the final of the 100-meter butterfly, US born Serbian swimmer Milorad Cavic caused a minor stir when he said it would be "good" if Phelps lost. "It'd be good for him if he loses. It would be nice if historians talk about Michael Phelps winning seven gold medals and losing the eighth to 'some guy.' I'd like to be that guy", Cavic said. Phelps responded, "When people say things like that, it fires me up more than anything." On August 16, Phelps won his seventh gold medal of the Games in the men's 100-meter butterfly, setting an Olympic record for the event with a time of 50.58 seconds and edging out his nearest competitor Cavic, by one hundredth (0.01) of a second.  Q. Why was he lost? A. "It'd be good for him if he loses Q. What did Phelps do on August 16? A. 
On August 16, Phelps won his seventh gold medal of the Games in the men's 100meter butterfly, Q.Who did he win against? A. 100-meter butterfly, Q. Who is the Serbian swimmer? A. US born Serbian swimmer Milorad Cavic Q. Who did he lose? A. Milorad Cavic Q. When did he win a medal? A. On August 16 Q. How many gold medals did he win? A. Phelps won his seventh gold medal of the Games in the men's 100-meter butterfly, Q. Who did he beat? A. edging out his nearest competitor Cavic, by one hundredth (0.01) of a second. On February 25, 2003 Converge released their first official DVD, The Long Road Home. The DVD is modeled after band home videos such as Metallica's Cliff Em' All release. Deathwish Inc describes the DVD as a "two disc collection that is as energetic and exciting as the moments the release captures". The DVD also comes with a bonus disk that included three full live sets from the band. Q. What did they do in 2003? A. On February 25, 2003 Converge released their first official DVD, The Long Road Home. Q. What was their first DVD? A. On February 25, 2003 Converge released their first official DVD, The Long Road Home. Q. When did they release this? A. On February 25, 2003 Q. Where were the release? A. The Long Road Home. Q.What was the DVD about? A. The DVD is modeled after band home videos such as Metallica's Cliff Em' All release. Q. What other videos did they have? A. Metallica's Cliff Em' All release. Q. How many sets were from the band? A. three full live sets from the bad Q. What is Deathwise Inc? A. Deathwish Inc describes the DVD as a "two disc collection that is as energetic and exciting as the moments the release captures". Converge (band) Michael Phelps Orson Welles Breaking with the Federal Theatre Project in 1937, Welles and Houseman founded their own repertory company, which they called the Mercury Theatre. The name was inspired by the title of the iconoclastic magazine, The American Mercury. Welles was executive producer, and the original company included such actors as Joseph Cotten, George Coulouris, Geraldine Fitzgerald, Arlene Francis, Martin Gabel, John Hoyt, Norman Lloyd, Vincent Price, Stefan Schnabel and Hiram Sherman. Q. What is Mercury Theatre? A. Breaking with the Federal Theatre Project in 1937, Welles and Houseman founded their own repertory company, which they called the Mercury Theatre. Q. What company did they form? A. Federal Theatre Project Q. When was the Federal Theatre Project founded? A. 1937, Q. Who started it? A. Welles and Houseman founded their own repertory company, which they called the Mercury Theatre. Q. Why was it called the Federal Theatre? A. The name was inspired by the title of the iconoclastic magazine, The American Mercury. Q. What was the name of the iconoclastic magazine? A. The American Mercury Q. Who was the producer? A. Welles was executive producer, and the original company included such actors as Joseph Cotten, George Coulouris, Geraldine Fitzgerald, Arlene Francis, Martin Gabel, Figure A1: Three SQUASH outputs generated by our system, showcasing the strengths and weaknesses described in Section 5.
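As a compact reference for the hyperparameters listed in Appendix B above, the PyTorch skeleton below wires them into an encoder–decoder of the stated sizes. It is a sketch only: how the specificity embedding is injected, the omission of answer-span features or any copy mechanism, and the placeholder vocabulary size are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class QGenSkeleton(nn.Module):
    """Encoder-decoder skeleton using the Appendix B hyperparameters."""
    def __init__(self, vocab_size: int, emb_size: int = 512, hidden: int = 512,
                 spec_emb_size: int = 16, num_classes: int = 2, dropout: float = 0.5):
        super().__init__()
        # Token embeddings are shared between the paragraph encoder and question decoder.
        self.tok_emb = nn.Embedding(vocab_size, emb_size)
        self.spec_emb = nn.Embedding(num_classes, spec_emb_size)   # GENERAL / SPECIFIC
        # Two-layer bidirectional encoder, 512 hidden units per direction, dropout between layers.
        self.encoder = nn.LSTM(emb_size + spec_emb_size, hidden, num_layers=2,
                               bidirectional=True, dropout=dropout, batch_first=True)
        # Unidirectional decoder over previously generated question tokens.
        self.decoder = nn.LSTM(emb_size, hidden, num_layers=1, batch_first=True)
        # Bilinear ("general") attention between decoder and encoder states (Luong et al., 2015).
        self.attn = nn.Linear(hidden, 2 * hidden, bias=False)
        self.out = nn.Linear(hidden + 2 * hidden, vocab_size)

    def forward(self, paragraph, specificity, question_in):
        spec = self.spec_emb(specificity).unsqueeze(1).expand(-1, paragraph.size(1), -1)
        enc_in = torch.cat([self.tok_emb(paragraph), spec], dim=-1)
        enc_out, _ = self.encoder(enc_in)                        # (B, Lp, 2*hidden)
        dec_out, _ = self.decoder(self.tok_emb(question_in))     # (B, Lq, hidden)
        scores = torch.bmm(self.attn(dec_out), enc_out.transpose(1, 2))
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_out)
        return self.out(torch.cat([dec_out, context], dim=-1))   # (B, Lq, vocab)

model = QGenSkeleton(vocab_size=50000)          # placeholder vocabulary size
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training loop (not shown): minibatch size 32, gradient clipping at 5.0 via
# torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0),
# early stopping on validation perplexity.
```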
2019
224
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2335–2345 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2335 Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction Kosuke Nishida1, Kyosuke Nishida1, Masaaki Nagata2, Atsushi Otsuka1, Itsumi Saito1, Hisako Asano1, Junji Tomita1 1 NTT Media Intelligence Laboratories, NTT Corporation 2 NTT Communication Science Laboratories, NTT Corporation [email protected] Abstract Question answering (QA) using textual sources for purposes such as reading comprehension (RC) has attracted much attention. This study focuses on the task of explainable multi-hop QA, which requires the system to return the answer with evidence sentences by reasoning and gathering disjoint pieces of the reference texts. It proposes the Query Focused Extractor (QFE) model for evidence extraction and uses multi-task learning with the QA model. QFE is inspired by extractive summarization models; compared with the existing method, which extracts each evidence sentence independently, it sequentially extracts evidence sentences by using an RNN with an attention mechanism on the question sentence. It enables QFE to consider the dependency among the evidence sentences and cover important information in the question sentence. Experimental results show that QFE with a simple RC baseline model achieves a state-of-the-art evidence extraction score on HotpotQA. Although designed for RC, it also achieves a state-of-the-art evidence extraction score on FEVER, which is a recognizing textual entailment task on a large textual database. 1 Introduction Reading comprehension (RC) is a task that uses textual sources to answer any question. It has seen significant progress since the publication of numerous datasets such as SQuAD (Rajpurkar et al., 2016). To achieve the goal of RC, systems must be able to reason over disjoint pieces of information in the reference texts. Recently, multi-hop question answering (QA) datasets focusing on this capability, such as QAngaroo (Welbl et al., 2018) and HotpotQA (Yang et al., 2018), have been released. Multi-hop QA faces two challenges. The first is the difficulty of reasoning. It is difficult for the Figure 1: Concept of explainable multi-hop QA. Given a question and multiple textual sources, the system extracts evidence sentences from the sources and returns the answer and the evidence. system to find the disjoint pieces of information as evidence and reason using the multiple pieces of such evidence. The second challenge is interpretability. The evidence used to reason is not necessarily located close to the answer, so it is difficult for users to verify the answer. Yang et al. (2018) released HotpotQA, an explainable multi-hop QA dataset, as shown in Figure 1. Hotpot QA provides the evidence sentences of the answer for supervised learning. The evidence extraction in multi-hop QA is more difficult than that in other QA problems because the question itself may not provide a clue for finding evidence sentences. As shown in Figure 1, the system finds an evidence sentence (Evidence 2) by relying on another evidence sentence (Evidence 1). The capability of being able to explicitly extract evidence is an advance towards meeting the above two challenges. Here, we propose a Query Focused Extractor (QFE) that is based on a summarization model. 
We regard the evidence extraction of the explainable multi-hop QA as a query-focused summarization task. Query-focused summarization is the task of summarizing the source document with regard to the given query. QFE sequentially extracts the evidence sentences by using an RNN with 2336 an attention mechanism on the question sentence, while the existing method extracts each evidence sentence independently. This query-aware recurrent structure enables QFE to consider the dependency among the evidence sentences and cover the important information in the question sentence. Our overall model uses multi-task learning with a QA model for answer selection and QFE for evidence extraction. The multi-task learning with QFE is general in the sense that it can be combined with any QA model. Moreover, we find that the recognizing textual entailment (RTE) task on a large textual database, FEVER (Thorne et al., 2018), can be regarded as an explainable multi-hop QA task. We confirm that QFE effectively extracts the evidence both on HotpotQA for RC and on FEVER for RTE. Our main contributions are as follows. • We propose QFE for explainable multi-hop QA. We use the multi-task learning of the QA model for answer selection and QFE for evidence extraction. • QFE adaptively determines the number of evidence sentences by considering the dependency among the evidence sentences and the coverage of the question. • QFE achieves state-of-the-art performance on both HotpotQA and FEVER in terms of the evidence extraction score and comparable performance to competitive models in terms of the answer selection score. QFE is the first model that outperformed the baseline on HotpotQA. 2 Task Definition Here, we re-define explainable multi-hop QA so that it includes the RC and the RTE tasks. Def. 1. Explainable Multi-hop QA Input: Context C (multiple texts), Query Q (text) Output: Answer Type AT (label), Answer String AS (text), Evidence E (multiple texts) The Context C is regarded as one connected text in the model. If the connected C is too long (e.g. over 2000 words), it is truncated. The Query Q is the query. The model answers Q with an answer type AT or an answer string AS. The Answer Type AT is selected from the answer candidates, such as ‘Yes’. The answer candidates depend on the task setting. The Answer String AS Figure 2: Overall model architecture. The answer layer is the version for the RC task. exists only if there are not enough answer candidates to answer Q. The answer string AS is a short span in C. Evidence E consists of the sentences in C and is required to answer Q. For RC, we tackle HotpotQA. In HotpotQA, the answer candidates are ‘Yes’, ‘No’, and ‘Span’. The answer string AS exists if and only if the answer type AT is ‘Span’. C consists of ten Wikipedia paragraphs. The evidence E consists of two or more sentences in C. For RTE, we tackle FEVER. In FEVER, the answer candidates are ‘Supports’, ‘Refutes’, and ‘Not Enough Info’. The answer string AS does not exist. C is the Wikipedia database. The evidence E consists of the sentences in C. 3 Proposed Method This section first explains the overall model architecture, which contains our model as a module, and then the details of our QFE. 3.1 Model Architecture Except for the evidence layer, our model is the same as the baseline (Clark and Gardner, 2018) used in HotpotQA (Yang et al., 2018). Figure 2 shows the model architecture. The input of the model is the context C and the query Q. The model has the following layers. 
The Word Embedding Layer encodes C and Q as sequences of word vectors. A word vector is the concatenation of a pre-trained word embedding and a character-based embedding obtained using a CNN (Kim, 2014). The outputs are C1 ∈ 2337 Rlw×dw, Q1 ∈Rmw×dw, where lw is the length (in words) of C, mw is the length of Q and dw is the size of the word vector. The Context Layer encodes C1, Q1 as contextual vectors C2 ∈Rlw×2dc, Q2 ∈Rmw×2dc by using a bi-directional RNN (Bi-RNN), where dc is the output size of a uni-directional RNN. The Matching Layer encodes C2, Q2 as matching vectors C3 ∈Rlw×dc by using bi-directional attention (Seo et al., 2017), a Bi-RNN, and selfattention (Wang et al., 2017). The Evidence Layer first encodes C3 as [−→ C4; ←− C4] ∈Rlw×2dc by a Bi-RNN. Let j1(i) be the index of the first word of the i-th sentence in C and j2(i) be the index of the last word. We define the vector of the i-th sentence as: xi = [−−−→ c4,j2(i); ←−−− c4,j1(i)] ∈R2dc. Here, X ∈Rls×2dc is the sentence-level context vectors, where ls is the number of sentences of C. QFE, described later, receives sentence-level context vectors X ∈Rls×2dc and the contextual query vectors Q2 ∈Rmw×2dc as Y. QFE outputs the probability distribution that the i-th sentence is the evidence: Pr(i) = QFE(X, Y = Q2). (1) Then, the evidence layer concatenates the wordlevel vectors and the sentence-level vectors: c5,j = [c3,j; xi(j)] ∈R3dc, where the j-th word in C is included in the i(j)-th sentence in C. The Answer Layer predicts the answer type AT and the answer string AS from C5. The layer has stacked Bi-RNNs. The output of each Bi-RNN is mapped to the probability distribution by the fully connected layer and the softmax function. For RC, the layer has three stacked Bi-RNNs. Each probability indicates the start of the answer string, ˆAS1 ∈Rlw, the end of the answer string ˆAS2 ∈Rlw, and the answer type, ˆAT ∈R3. For RTE, the layer has one Bi-RNN. The probability indicates the answer type.      Extraction RNN        Glimpse Sentence Vectors  Query Vectors  Figure 3: Overview of Query Focused Extractor at step t. zt is the current summarization vector. gt is the query vector considering the current summarization. et is the extracted sentence. xet updates the RNN state. Loss Function: Our model uses multi-task learning with a loss function L = LA+LE, where LA is the loss of the answer and LE is the loss of the evidence. The answer loss LA is the sum of the cross-entropy losses for all probability distributions obtained by the answer layer. The evidence loss LE is defined in subsection 3.3. 3.2 Query Focused Extractor Query Focused Extractor (QFE) is shown as the red box in Figure 2. QFE is an extension of the extractive summarization model of Chen and Bansal (2018), which is not for query-focused settings. Chen and Bansal used an attention mechanism to extract sentences from the source document such that the summary would cover the important information in the source document. To focus on the query, QFE extracts sentences from C with attention on Q such that the evidence covers the important information with respect to Q. Figure 3 shows an overview of QFE. The inputs of QFE are the sentence-level context vectors X ∈Rls×2dc and contextual query vectors Y ∈Rmw×2dc. We define the timestep to be the operation to extract a sentence. QFE updates the state of the RNN (the dark blue box in Figure 3) as follows: zt = RNN(zt−1, xet) ∈R2dc, where et ∈{1, · · · , ls} is the index of the sentence extracted at step t. 
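Before moving on to the extraction distribution, note that the sentence-level inputs xi defined in the evidence layer above have a simple gather-and-concatenate form. The minimal sketch below assumes the Bi-RNN outputs are already split into forward and backward halves; the helper name and example sizes are illustrative.

```python
import torch

def sentence_vectors(c4_fwd: torch.Tensor, c4_bwd: torch.Tensor,
                     boundaries: list) -> torch.Tensor:
    """
    c4_fwd, c4_bwd: (lw, dc) forward / backward Bi-RNN outputs over the context words.
    boundaries: list of (j1, j2) indices of the first / last word of each sentence.
    Returns X: (ls, 2*dc), where x_i = [forward state at j2(i); backward state at j1(i)].
    """
    rows = [torch.cat([c4_fwd[j2], c4_bwd[j1]], dim=-1) for (j1, j2) in boundaries]
    return torch.stack(rows, dim=0)

dc, lw = 150, 40                                  # example sizes only
fwd, bwd = torch.randn(lw, dc), torch.randn(lw, dc)
X = sentence_vectors(fwd, bwd, [(0, 11), (12, 25), (26, 39)])   # three sentences
assert X.shape == (3, 2 * dc)
```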
We define Et = {e1, · · · , et} to be the set of sentences extracted until step t. QFE extracts the i-th sentence according to the probability distribution (the light blue box): Pr(i; Et−1) = softmaxi(ut i) ut i =      v⊤ p tanh(Wp1xi + Wp2gt + Wp3zt) (i ̸∈Et−1) −∞ (otherwise) . Then, QFE selects et = argmax Pr(i; Et−1). 2338 Let gt be a query vector considering the importance at step t. We define gt as the glimpse vector (Vinyals et al., 2016) (the green box): gt = X j αt jWg1yj ∈R2dc αt = softmax(at) ∈Rmw at j = v⊤ g tanh(Wg1yj + Wg2zt). The initial state of the RNN is the vector obtained via the fully connected layer and the max pooling from X. All parameters W· ∈R2dc×2dc and v· ∈R2dc are trainable. 3.3 Training Phase In the training phase, we use teacher-forcing to make the loss function. The loss of the evidence LE is the negative log likelihood regularized by a coverage mechanism (See et al., 2017): LE = − |E| X t=1 log  max i∈E\Et−1 Pr(i; Et−1)  + X i min(ct i, αt i). The max operation in the first term enables the sentence with the highest probability to be extracted. This operation means that QFE extracts the sentences in the predicted importance order. On the other hand, the evidence does not have the ground truth order in which it is to be extracted, so the loss function ignores the order of the evidence sentences. The coverage vector ct is defined as ct = Pt−1 τ=1 ατ. In order to learn the terminal condition of the extraction, QFE adds a dummy sentence, called the EOE sentence, to the sentence set. When the EOE sentence is extracted, QFE terminates the extraction. The EOE sentence vector xEOE ∈R2dc is a trainable parameter in the model, so xEOE is independent of the samples. We train the model to extract the EOE sentence after all evidence. 3.4 Test Phase In the test phase, QFE terminates the extraction by reaching the EOE sentence. The predicted evidence is defined as ˆE = argmin ( −1 | ˆE| X t log max i̸∈ˆEt−1 Pr(i; ˆEt−1) ) , where ˆEt is the predicted evidence until step t. QFE uses the beam search algorithm to search ˆE. Context Query Evidence # paragraphs # words # words # sentences Ave. 10.0 1162.0 17.8 2.4 Max 10 3079 59 8 Median 10 1142 17 2 Min 2 60 7 2 Table 1: Statistics of HotpotQA (the development set in the distractor setting). 4 Experiments on RC 4.1 HotpotQA Dataset In HotpotQA, the query Q is created by crowd workers, on the condition that answering Q requires reasoning over two paragraphs in Wikipedia. The candidates of AT are ‘Yes’, ‘No’, and ‘Span’. The answer string AS, if it exists, is a span in the two paragraphs. The context C is ten paragraphs, and its content has two settings. In the distractor setting, C consists of the two gold paragraphs used to create Q and eight paragraphs retrieved from Wikipedia by using TF-IDF with Q. Table 1 shows the statistics of the distractor setting. In the fullwiki setting, all ten paragraphs of C are retrieved paragraphs. Hence, C may not include two gold paragraphs, and in that case, AS and E cannot be extracted. Therefore, the oracle model does not achieve 100 % accuracy. HotpotQA does not provide the training data for the fullwiki setting, and the training data in the fullwiki setting is the same as the distractor setting. 4.2 Experimental Setup Comparison models Our baseline model is the same as the baseline in Yang et al. (2018) except as follows. Whereas we use equation (1), they use Pr(i) = sigmoid(w⊤xi + b), where w ∈R2dc, b ∈R are trainable parameters. 
The evidence loss LE is the sum of binary cross-entropy functions on whether each of the sentences is evidence or not. In the test phase, the sentences with probabilities higher than a threshold are selected. We set the threshold to 0.4 because it gave the highest F1 score on the development set. The remaining parts of the implementations of our and baseline models are the same. The details are in Appendix A.1. We also compared DFGN + BERT (Xiao et al., 2019), Cognitive Graph (Ding et al., 2019), GRN and BERT Plus, which were unpublished at the submission time (4 March 2019). 2339 Answer Evidence Joint EM F1 EM F1 EM F1 Baseline 45.6 59.0 20.3 64.5 10.8 40.2 BERT Plus 56.0 69.9 42.3 80.6 26.9 58.1 DFGN + BERT 55.2 68.5 49.9 81.1 31.9 58.2 GRN 52.9 66.7 52.4 84.1 31.8 58.5 QFE 53.9 68.1 57.8 84.5 34.6 59.6 Table 2: Performance of the models on the HotpotQA distractor setting leaderboard1 (4 March 2019). The models except for the baseline were unpublished at the time of submission of this paper. Our model was submitted on 21 November 2018, three months before the other submissions. Answer Evidence Joint EM F1 EM F1 EM F1 Baseline 24.0 32.9 3.86 37.7 1.85 16.2 GRN 27.3 36.5 12.2 48.8 7.40 23.6 Cognitive Graph 37.1 48.9 22.8 57.8 12.4 34.9 QFE 28.7 38.1 14.2 44.4 8.69 23.1 Table 3: Performance of the models on the HotpotQA fullwiki setting leaderboard1 (4 March 2019). The models except for the baseline were unpublished at the time of submission of this paper. Our model was submitted on 25 November 2018, three months before the other submissions. Evaluation metrics We evaluated the prediction of AT , AS and E by using the official metrics in HotpotQA. Exact match (EM) and partial match (F1) were used to evaluate both the answer and the evidence. For the answer evaluation, the score was measured by the classification accuracy of AT . Only when AT was ‘Span’ was the score also measured by the word-level matching of AS. For the evidence, the partial match was evaluated by the sentence ids, so word-level partial matches were not considered. For metrics on both the answer and the evidence, we used Joint EM and Joint F1 (Yang et al., 2018). 4.3 Results Does our model achieve state-of-the-art performance? Table 2 shows that, in the distractor setting, QFE performed the best in terms of the evidence extraction score among all models compared. It also achieved comparable performance in terms of the answer selection score and therefore achieved state-of-the-art performance on the joint EM and F1 metrics, which are the main metric on the dataset. QFE outperformed the baseline model in all metrics. Although our model does not use any pre-trained language model such as Answer Evidence Joint EM F1 EM F1 EM F1 Yang et al. (2018) 44.4 58.3 22.0 66.7 11.6 40.9 our implementation2 52.7 67.3 38.0 78.4 21.9 54.9 + top 2 extraction 52.7 67.3 48.0 77.8 27.6 54.4 QFE 53.7 68.7 58.8 84.7 35.4 60.6 without glimpse 53.1 67.9 58.4 84.3 34.8 59.6 pipeline model 46.9 63.6 – – – – Table 4: Performance of our models and the baseline models on the development set in the distractor setting. BERT (Devlin et al., 2019) for encoding, it outperformed the methods that used BERT such as DFGN + BERT and BERT Plus. In particular, the improvement in the evidence EM score was +37.5 points against the baseline and +5.4 points against GRN. In the fullwiki setting, Table 3 shows that QFE outperformed the baseline in all metrics. 
Compared with the unpublished model at the submission time, Cognitive Graph (Ding et al., 2019) outperformed our model. There is a dataset shift problem (Quionero-Candela et al., 2009) in HotpotQA, where the distribution of the number of gold evidence sentences and the answerability differs between training (i.e., the distractor setting) and test (i.e., the fullwiki setting) phases. In the fullwiki setting, the questions may have less than two gold evidence sentences or be even unanswerable. Our current QA and QFE models do not consider solving the dataset shift problem; our future work will deal with it. Does QFE contribute to the performance? Table 4 shows the results of the ablation study. QFE performed the best among the models compared. Although the difference between our overall model and the baseline is the evidence extraction model, the answer scores also improved. QFE also outperformed the model that used only RNN extraction without glimpse. QFE defines the terminal condition as reaching the EOE sentence, which we call adaptive termination. We confirmed that the adaptive termination of QFE contributed to its performance. We compared QFE with a baseline that extracts the two sentences with the highest scores, since the most frequent number of evidence sentences is two. QFE outperformed this baseline. 1https://hotpotqa.github.io/ 2The differences in score among the original and our implementations of Yang et al. (2018) are due to the hyper parameters. The main change is increasing dc from 50 to 150. 2340 Precision Recall Correlation baseline 79.0 82.4 0.259 QFE 88.4 83.2 0.375 Table 5: Performance of our model and the baseline in evidence extraction on the development set in the distractor setting. The correlation is the Kendall tau correlation of the number of predicted evidence sentences and that of gold evidence. -3 -2 -1 0 1 2 3 num. of predictions minus num. of gold sentences 0 1000 2000 3000 4000 5000 # samples Figure 4: Number of predicted evidence sentences minus the number of gold evidence sentences. Our model uses the results of evidence extraction as a guide for selecting the answer, but it is not a pipeline model of evidence extraction and answer selection. Therefore, we evaluated a pipeline model that selects the answer string AS only from the extracted evidence sentences, where the outputs of the answer layer corresponding to nonevidence sentences are masked with the prediction of the evidence extraction. Although almost all answer strings in the dataset are in the gold evidence sentences, the model performed poorly. We consider that the evidence extraction helps QA model to learn, but its performance is not enough to improve the performance of the answer layer with the pipeline model. What are the characteristics of our evidence extraction? Table 5 shows the evidence extraction performance in the distractor setting. Our model improves both precision and recall, and the improvement in precision is larger. Figure 4 reveals the reason for the high EM and precision scores; QFE rarely extracts too much evidence. That is, it predicts the number of evidence sentences more accurately than the baseline. Table 5 also shows the correlation of our model about the number of evidence sentences is higher than that of the baseline. We consider that the sequential extraction and the adaptive termination help to prevent overextraction. 
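To make the sequential extraction and adaptive termination concrete, the following is a greedy, single-example sketch of the QFE loop following the equations of Section 3.2. The choice of a GRU cell (the paper only says "RNN"), the zero-initialized EOE vector, and the max_steps cap are assumptions; teacher forcing, the coverage term of Section 3.3, and the beam search of Section 3.4 are omitted.

```python
import torch
import torch.nn as nn

class QFESketch(nn.Module):
    """Greedy sketch of one QFE extraction loop (Section 3.2); beam search is omitted."""
    def __init__(self, d: int):
        super().__init__()
        d2 = 2 * d
        self.rnn = nn.GRUCell(d2, d2)      # the paper only specifies "RNN"; GRU is an assumption
        self.init_fc = nn.Linear(d2, d2)   # initial state: fully connected layer over max-pooled X
        self.Wp1, self.Wp2, self.Wp3 = (nn.Linear(d2, d2, bias=False) for _ in range(3))
        self.Wg1, self.Wg2 = (nn.Linear(d2, d2, bias=False) for _ in range(2))
        self.vp = nn.Linear(d2, 1, bias=False)
        self.vg = nn.Linear(d2, 1, bias=False)
        self.x_eoe = nn.Parameter(torch.zeros(d2))   # dummy EOE "sentence", shared across samples

    def forward(self, X, Y, max_steps=8):
        # X: (ls, 2d) sentence-level context vectors; Y: (mw, 2d) contextual query vectors.
        X = torch.cat([X, self.x_eoe.unsqueeze(0)], dim=0)       # append the EOE sentence
        eoe = X.size(0) - 1
        z = self.init_fc(X.max(dim=0).values)
        extracted, mask = [], torch.zeros(X.size(0), dtype=torch.bool)
        for _ in range(max_steps):      # cap; HotpotQA evidence has at most 8 sentences (Table 1)
            # Glimpse over the query, conditioned on the current summarization state z_t.
            a = self.vg(torch.tanh(self.Wg1(Y) + self.Wg2(z))).squeeze(-1)        # (mw,)
            g = (torch.softmax(a, dim=0).unsqueeze(-1) * self.Wg1(Y)).sum(dim=0)  # g_t, (2d,)
            # Score every sentence; already-extracted ones are masked to -inf, as in the paper.
            u = self.vp(torch.tanh(self.Wp1(X) + self.Wp2(g) + self.Wp3(z))).squeeze(-1)
            u = u.masked_fill(mask, float("-inf"))
            e = int(torch.softmax(u, dim=0).argmax())
            if e == eoe:                                          # adaptive termination
                break
            extracted.append(e)
            mask[e] = True
            z = self.rnn(X[e].unsqueeze(0), z.unsqueeze(0)).squeeze(0)  # z_t = RNN(z_{t-1}, x_{e_t})
        return extracted
```

During training, e would instead be teacher-forced to the gold sentence with the highest current probability, and the attention weights over the query would accumulate into the coverage vector c_t used in the evidence loss L_E.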
In contrast, the baseline evaluates each sentence independently, so the baseline often exAnswer Evidence # Evi # sample EM F1 Num EM P R F1 all 100 53.7 68.7 2.22 58.8 88.4 83.2 84.7 2 67.4 54.8 69.6 2.09 76.9 88.4 91.1 89.4 3 24.0 52.5 68.4 2.43 26.0 89.3 71.8 78.7 4 7.25 52.5 66.9 2.61 14.0 90.7 59.4 70.4 5 1.08 42.5 57.0 2.65 2.50 92.1 49.5 63.1 Table 6: Performance of our model in terms of the number of gold evidence sentences on the development set in the distractor setting. # sample, Num, P and R mean the proportion in the dataset, number of predicted evidence sentences, precision, and recall, respectively. Answer Evidence Joint EM F1 EM F1 EM F1 all 53.7 68.7 58.8 84.7 35.4 60.6 comparison 54.1 60.7 71.2 88.8 42.0 55.6 bridge 53.6 70.7 55.7 83.7 33.8 61.8 Table 7: Performance of our model for each reasoning type on the development set in the distractor setting. tracts too much evidence. What questions in HotpotQA are difficult for QFE? We analyzed the difficulty of the questions for QFE from the perspective of the number of evidence sentences and reasoning type; the results are in Table 6 and Table 7. First, we classified the questions by the number of gold evidence sentences. Table 6 shows the model performance for each number. The answer scores were low for the questions answered with five evidence sentences, which indicated that questions requiring much evidence are difficult. However, the five-evidence questions amount to only 80 samples, so this observation needs to be confirmed with more analysis. QFE performed well when the number of gold evidence sentences was two. Even though QFE was relatively conservative when extracting many evidence sentences, it was able to extract more than two sentences adaptively. Second, we should mention the reasoning types in Table 7. HotpotQA has two reasoning types: entity bridge and entity comparison. Entity bridge means that the question mentioned one entity and the article of this entity has another entity required for the answer. Entity comparison means that the question compares two entities. Table 7 shows that QFE works on each reasoning type. We consider that the difference between the results is due to the characteristics of the dataset. The answer F1 was relatively low in the comparison questions, because all yes/no 2341 Q: Which band has more members, Kitchens of Distinction or Royal Blood? AT = ˆAT : Kitchens of Distinction gold predicted probability[%] text ✓ 1 96.9 Kitchens of Distinction ... are an English three-person alternative rock band ... ✓ 2 0.2 →81.4 Royal Blood are an English rock duo formed in Brighton in 2013. 3 0.0 →0.0 →52.3 EOE sentence — 2.9 →16.8 →31.9 In September 2012, ... members ... as Kitchens of Distinction. — 0.0 →0.0 →0.0 Royal Blood is the eponymous debut studio album by British rock duo Royal Blood. Table 8: Outputs of QFE. The sentences are extracted in the order shown in the predicted column. The extraction scores of the sentences at each step are in the probability column. questions belong to the comparison question and partial matches do not happen in yes/no questions. The evidence EM was relatively high in the comparison questions. One of the reason is that 77.1 % of the comparison questions have just two evidence sentences. This proportion is larger than that in the bridge questions, 64.9%. 
From another perspective, the comparison question sentence itself will contain the clues (i.e., two entities) required to gather all evidence sentences, while the bridge question sentence itself will provide only a part of the clues and require multi-hop reasoning, i.e., finding an evidence sentence from another evidence sentence. Therefore, the evidence extraction of the bridge questions is more difficult than that of the comparison questions. Qualitative Analysis. Table 8 shows an example of the behavior of QFE. In it, the system must compare the number of members of Kitchens of Distinction and with those of Royal Blood. The system extracted the two sentences describing the number of members. Then, the system extracted the EOE sentence. We should note two sentences that were not extracted. The first sentence includes ‘members’ and ‘Kitchens of Distinction’, which are included in the query. However, this sentence does not mention the number of the members of Kitchens of Distinction. The second sentence also shows that Royal Blood is a duo. However, our model preferred Royal Blood (band name) to Royal Blood (album name) as the subject of the sentence. Other examples are shown in Appendix A.2. 5 Experiments on RTE 5.1 FEVER Dataset In FEVER, the query Q is created by crowd workers. Annotators are given a randomly sampled senContext Query Evidence # pages # words # sentences Ave. 5416537 9.60 1.13 Max — 39 52 Median — 9 1 Min — 3 0 Table 9: Statistics of FEVER (the development set). tence and a corresponding dictionary. The given sentence is from Wikipedia. The key-value of the corresponding dictionary consists of an entity and a description of the entity. Entities are those that have a hyperlink from the given sentence. The description is the first sentence of the entity’s Wikipedia page. Only using the information in the sentence and the dictionary, annotators create a claim as Q. The candidates of AT are ‘Supports’, ‘Refutes’ and ‘Not Enough Info (NEI)’. The proportion of samples with more than one evidence sentence is 27.3% in the samples whose label is not ‘NEI’. The context C is the Wikipedia database shared among all samples. Table 9 shows the statistics. 5.2 Experimental Setup Because C is large, we used the NSMN document retriever (Nie et al., 2019) and gave only the topfive paragraphs to our model. Similar to NSMN, in order to capture the semantic and numeric relationships, we used 30-dimensional WordNet features and five-dimensional number embeddings. The WordNet features are binaries reflecting the existence of hypernymy/antonymy words in the input. The number embedding is a real-valued embedding assigned to any unique number. Because the number of samples in the training data is biased on the answer type AT , randomly selected samples were copied in order to equalize the 2342 Evidence Answer FEVER F1 Acc. Nie et al. (2019) 53.0 68.2 64.2 Yoneda et al. (2018) 35.0 67.6 62.5 who 37.4 72.1 66.6 Kudo 36.8 70.6 65.7 avonamila 60.3 71.4 65.3 hz66pasa 71.4 33.3 22.0 aschern 70.4 69.3 60.9 QFE 77.7 69.3 61.8 Table 10: Performance of the models on the FEVER leaderboard3 (4 March 2019). The top two rows are the models submitted during the FEVER Shared Task that have higher FEVER scores than ours. The middle three rows are the top-three FEVER models submitted after the Shared Task. The rows next to the bottom and the bottom row (ours) show the top-three F1 models submitted after the Shared Task. None of the models submitted after the Shared Task has paper information. numbers. 
Our model used ensemble learning of 11 randomly initialized models. For the evidence extraction, we used the union of the predicted evidences of each model. If the model predicts AT as ‘Supports’ or ‘Refutes’, the model extracts at least one sentence. Details of the implementation are in Appendix A.1. We evaluated the prediction of AT and the evidence E by using the official metrics in FEVER. AT was evaluated in terms of the label accuracy. E was evaluated in terms of precision, recall and F1, which were measured by sentence id. The FEVER score was used as a metric accounting for both AT and E. The FEVER score of a sample is 1 if the predicted evidence includes all gold evidence and the answer is correct. That is, the FEVER score emphasizes the recall of extracting evidence sentences over the precision. 5.3 Results Does our multi-task learning approach achieve state-of-the-art performance? Table 10 shows QFE achieved state-of-the-art performance in terms of the evidence F1 and comparable performance in terms of label accuracy to the competitive models. The FEVER score of our model is lower than those of other models, because the FEVER score emphasizes recall. However, the importance of the precision and the recall depends on the utilization. QFE is suited to situations where concise output is preferred. 3https://competitions.codalab.org/competitions/18814 Precision Recall F1 Nie et al. (2019) 42.3 70.9 53.0 Yoneda et al. (2018) 22.2 82.8 35.0 Hanselowski et al. (2018) 23.6 85.2 37.0 Malon (2018) 92.2 50.0 64.9 QFE ensemble (test) 79.1 76.3 77.7 QFE single (dev) 90.8 64.9 76.6 QFE ensemble (dev) 83.9 78.1 81.0 Table 11: Performance of evidence extraction. The top five rows are evaluated on the test set. The comparison of our models is on the development set. The models submitted after the Shared Task have no information about precision or recall. What are the characteristics of our evidence extraction? Table 11 shows our model achieved high performance on all metrics of evidence extraction. On the test set, it ranked in 2nd place in precision, 3rd place in recall, and 1st place in F1. As for the results on the development set, QFE extracted with higher precision than recall. This tendency was the same as in the RC evaluation. The single model has a larger difference between precision and recall. The ensemble model improves recall and F1. Examples are shown in Appendix A.2. 6 Related Work 6.1 Reading Comprehension RC is performed by matching the context and the query (Seo et al., 2017). Many RC datasets referring to multiple texts have been published, such as MS MARCO (Nguyen et al., 2016) and TriviaQA (Joshi et al., 2017). For such datasets, the document retrieval model is combined with the contextquery matching model (Chen et al., 2017a; Wang et al., 2018a,b; Nishida et al., 2018). Some techniques have been proposed for understanding multiple texts. Clark and Gardner (2018) used simple methods, such as connecting texts. Choi et al. (2017); Zhong et al. (2019) proposed a combination of coarse reading and fine reading. However, Sugawara et al. (2018) indicated that most questions in RC require reasoning from just one sentence including the answer. The proportion of such questions is more than 63.2 % in TriviaQA and 86.2 % in MS MARCO. This observation is one of the motivations behind multi-hop QA. HotpotQA (Yang et al., 2018) is a task including supervised evidence extraction. QAngaroo (Welbl et al., 2018) is a task created by 2343 using Wikipedia entity links. 
The difference between QAngaroo and our focus is two-fold: (1) QAngaroo does not have supervised evidence and (2) the questions in QAngaroo are inherently limited because the dataset is constructed using a knowledge base. MultiRC (Khashabi et al., 2018) is also an explainable multi-hop QA dataset that provides gold evidence sentences. However, it is difficult to compare the performance of the evidence extraction with other studies because its evaluation script and leaderboard do not report the evidence extraction score. Because annotation of the evidence sentence is costly, unsupervised learning of the evidence extraction is another important issue. Wang et al. (2019) tackled unsupervised learning for explainable multi-hop QA, but their model is restricted to the multiple-choice setting. 6.2 Recognizing Textual Entailment RTE (Bowman et al., 2015; Williams et al., 2018) is performed by sentence matching (Rockt¨aschel et al., 2016; Chen et al., 2017b). FEVER (Thorne et al., 2018) has the aim of verification and fact checking for RTE on a large database. FEVER requires three sub tasks: document retrieval, evidence extraction, and answer prediction. In the previous work, the sub tasks are performed using pipelined models (Nie et al., 2019; Yoneda et al., 2018). In contrast, our approach performs evidence extraction and answer prediction simultaneously by regarding FEVER as an explainable multi-hop QA task. 6.3 Summarization A typical approach to sentence-level extractive summarization has an encoder-decoder architecture (Cheng and Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2018). Sentence-level extractive summarization is also used for content selection in abstractive summarization (Chen and Bansal, 2018). The model extracts sentences in order of importance and edits them. We have extended this model so that it can be used for evidence extraction because we consider that the evidence must be extracted in order of importance rather than the original order, which the conventional models use. 7 Conclusion We consider that the main contributions of our study are (1) the QFE model that is based on a summarization model for the explainable multihop QA, (2) the dependency among the evidence and the coverage of the question due to the usage of the summarization model, and (3) the state-ofthe-art performance in evidence extraction in both RC and RTE tasks. Regarding RC, we confirmed that the architecture with QFE, which is a simple replacement of the baseline, achieved state-of-the-art performance in the task setting. The ablation study showed that the replacement of the evidence extraction model with QFE improves performance. Our adaptive termination contributes to the exact matching and the precision score of the evidence extraction. The difficulty of the questions for QFE depends on the number of the required evidence sentences. This study is the first to base its experimental discussion on HotpotQA. Regarding RTE, we confirmed that, compared with competing models, the architecture with QFE has a higher evidence extraction score and comparable label prediction score. This study is the first to show a joint approach for RC and FEVER. References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading Wikipedia to Answer Open-Domain Questions. In ACL, pages 1870– 1879. 
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Enhanced LSTM for natural language inference. In ACL, pages 1657– 1668. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In ACL, pages 675–686. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In ACL, pages 484–494. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In ACL, pages 209–220. 2344 Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In ACL, pages 845–855. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL-HLT. To appear. Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In ACL. To appear. Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. Ukp-athene: Multi-sentence textual entailment for claim verification. In FEVER@EMNLP, pages 103–108. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL, pages 1601–1611. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACLHLT, pages 252–262. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Christopher Malon. 2018. Team papelo: Transformer networks at fever. In FEVER@EMNLP, pages 109– 113. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI, pages 3075–3081. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In NAACL-HLT, pages 1747–1759. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS. Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In AAAI. To appear. Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2018. Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension. In CIKM, pages 647–656. Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. 2009. Dataset Shift in Machine Learning. The MIT Press. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383–2392. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In ICLR. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In ACL, pages 1073–1083. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. 
Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading comprehension questions easier? In EMNLP, pages 4208– 4219. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018. The fact extraction and verification (FEVER) shared task. In FEVER@EMNLP, pages 1–9. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In ICLR. Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, Dan Roth, and David McAllester. 2019. Evidence sentence extraction for machine reading comprehension. arXiv preprint arXiv:1902.08852. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced reader-ranker for open-domain question answering. In AAAI, pages 5981–5988. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In ICLR. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In ACL, pages 189–198. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL, 6:287–302. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, pages 1112–1122. 2345 Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Dynamically fused graph network for multi-hop reasoning. In ACL. To appear. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP, pages 2369–2380. Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. UCL Machine Reading Group: Four factor framework for fact finding (HexaF). In FEVER@EMNLP, pages 97–102. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. In ICLR.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2346–2357 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2346 Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension An Yang1*, Quan Wang2, Jing Liu2, Kai Liu2, Yajuan Lyu2, Hua Wu2†, Qiaoqiao She2 and Sujian Li1† 1Key Laboratory of Computational Linguistics, Peking University, MOE, China 2Baidu Inc., Beijing, China {yangan, lisujian}@pku.edu.cn {wangquan05, liujing46, liukai20, lvyajuan, wu hua, sheqiaoqiao}@baidu.com Abstract Machine reading comprehension (MRC) is a crucial and challenging task in NLP. Recently, pre-trained language models (LMs), especially BERT, have achieved remarkable success, presenting new state-of-the-art results in MRC. In this work, we investigate the potential of leveraging external knowledge bases (KBs) to further improve BERT for MRC. We introduce KT-NET, which employs an attention mechanism to adaptively select desired knowledge from KBs, and then fuses selected knowledge with BERT to enable context- and knowledgeaware predictions. We believe this would combine the merits of both deep LMs and curated KBs towards better MRC. Experimental results indicate that KT-NET offers significant and consistent improvements over BERT, outperforming competitive baselines on ReCoRD and SQuAD1.1 benchmarks. Notably, it ranks the 1st place on the ReCoRD leaderboard, and is also the best single model on the SQuAD1.1 leaderboard at the time of submission (March 4th, 2019).1 1 Introduction Machine reading comprehension (MRC), which requires machines to comprehend text and answer questions about it, is a crucial task in natural language processing. With the development of deep learning and the increasing availability of datasets (Rajpurkar et al., 2016, 2018; Nguyen et al., 2016; Joshi et al., 2017), MRC has achieved remarkable advancements in the last few years. Recently language model (LM) pre-training has caused a stir in the MRC community. These LMs *This work was done while the first author was an intern at Baidu Inc. †Co-corresponding authors: Hua Wu and Sujian Li. 1Our code will be available at http://github. com/paddlepaddle/models/tree/develop/ PaddleNLP/Research/ACL2019-KTNET Passage: The US government has extended its review into whether trade sanctions against Sudan should be repealed. [...] Sudan is committed to the full implementation of UN Security Council resolutions on North Korea. [...] Sudan’s past support for North Korea could present an obstacle [...] Question: Sudan remains a XXX-designated state sponsor of terror and is one of six countries subject to the Trump administration’s ban. Original BERT prediction: UN Security Council Prediction with background knowledge: US Background knowledge: NELL: (Donald Trump, person-leads-organization, US) WordNet: (government, same-synset-with, administration) WordNet: (sanctions, common-hypernym-with, ban) Figure 1: An example from ReCoRD, with answer candidates marked (underlined) in the passage. The vanilla BERT model fails to predict the correct answer. But it succeeds after integrating background knowledge collected from WordNet and NELL. are pre-trained on unlabeled text and then applied to MRC, in either a feature-based (Peters et al., 2018a) or a fine-tuning (Radford et al., 2018) manner, both offering substantial performance boosts. 
Among different pre-training mechanisms, BERT (Devlin et al., 2018), which uses Transformer encoder (Vaswani et al., 2017) and trains a bidirectional LM, is undoubtedly the most successful by far, presenting new state-of-the-art results in MRC and a wide variety of other language understanding tasks. Owing to the large amounts of unlabeled data and the sufficiently deep architectures used during pre-training, advanced LMs such as BERT are able to capture complex linguistic phenomena, understanding language better than previously appreciated (Peters et al., 2018b; Goldberg, 2019). However, as widely recognized, genuine reading comprehension requires not only language understanding, but also knowledge that supports sophisticated reasoning (Chen et al., 2016; Mihaylov and Frank, 2018; Bauer et al., 2018; Zhong 2347 et al., 2018). Thereby, we argue that pre-trained LMs, despite their powerfulness, could be further improved for MRC by integrating background knowledge. Fig. 1 gives a motivating example from ReCoRD (Zhang et al., 2018). In this example, the passage describes that Sudan faces trade sanctions from US due to its past support for North Korea. The cloze-style question states that Sudan is subject to the Trump’s ban, and asks the organization by which Sudan is deemed to be a state sponsor of terror. BERT fails on this case as there is not enough evidence in the text. But after introducing the world knowledge “Trump is the person who leads US” and word knowledge “sanctions has a common hypernym with ban”, we can reasonably infer that the answer is “US”. This example suggests the importance and necessity of integrating knowledge, even on the basis of a rather strong model like BERT. We refer interested readers to Appendix A for another motivating example from SQuAD1.1 (Rajpurkar et al., 2016). Thus, in this paper, we devise KT-NET (abbr. for Knowledge and Text fusion NET), a new approach to MRC which improves pre-trained LMs with additional knowledge from knowledge bases (KBs). The aim here is to take full advantage of both linguistic regularities covered by deep LMs and high-quality knowledge derived from curated KBs, towards better MRC. We leverage two KBs: WordNet (Miller, 1995) that records lexical relations between words and NELL (Carlson et al., 2010) that stores beliefs about entities. Both are useful for the task (see Fig. 1). Instead of introducing symbolic facts, we resort to distributed representations (i.e., embeddings) of KBs (Yang and Mitchell, 2017). With such KB embeddings, we could (i) integrate knowledge relevant not only locally to the reading text but also globally about the whole KBs; and (ii) easily incorporate multiple KBs at the same time, with minimal task-specific engineering (see § 2.2 for detailed explanation). As depicted in Fig. 2, given a question and passage, KT-NET first retrieves potentially relevant KB embeddings and encodes them in a knowledge memory. 
Then, it employs, in turn, (i) a BERT encoding layer to compute deep, context-aware representations for the reading text; (ii) a knowledge integration layer to select desired KB embeddings from the memory, and integrate them with BERT representations; (iii) a self-matching layer to fuse BERT and KB representations, so as to enable rich Question + Passage BERT Encoding Knowledge Integration Self-Matching Output Start probabilities End probabilities … Bilinear Softmax … Σ Concat Concat Sentinel vector BERT vector KB embeddings KBs Figure 2: Overall architecture of KT-NET (left), with the knowledge integration module illustrated (right). interactions among them; and (iv) an output layer to predict the final answer. In this way we enrich BERT with curated knowledge, combine merits of the both, and make knowledge-aware predictions. We evaluate our approach on two benchmarks: ReCoRD (Zhang et al., 2018) and SQuAD1.1 (Rajpurkar et al., 2016). On ReCoRD, a passage is generated from the first few paragraphs of a news article, and the corresponding question the rest of the article, which, by design, requires background knowledge and reasoning. On SQuAD1.1 where the best models already outperform humans, questions remaining unsolved are really difficult ones. Both are appealing testbeds for evaluating genuine reading comprehension capabilities. We show that incorporating knowledge can bring significant and consistent improvements to BERT, which itself is one of the strongest models on both datasets. The contributions of this paper are two-fold: (i) We investigate and demonstrate the feasibility of enhancing pre-trained LMs with rich knowledge for MRC. To our knowledge, this is the first study of its kind, indicating a potential direction for future research. (ii) We devise a new approach KTNET to MRC. It outperforms competitive baselines, ranks the 1st place on the ReCoRD leaderboard, and is also the best single model on the SQuAD1.1 leaderboard at the time of submission (March 4th, 2019). 2 Our Approach In this work we consider the extractive MRC task. Given a passage with m tokens P = {pi}m i=1 and a question with n tokens Q = {qj}n j=1, our goal 2348 is to predict an answer A which is constrained as a contiguous span in the passage, i.e., A = {pi}b i=a, with a and b indicating the answer boundary. We propose KT-NET for this task, the key idea of which is to enhance BERT with curated knowledge from KBs, so as to combine the merits of the both. To encode knowledge, we adopt knowledge graph embedding techniques (Yang et al., 2015) and learn vector representations of KB concepts. Given passage P and question Q, we retrieve for each token w ∈P ∪Q a set of potentially relevant KB concepts C(w), where each concept c ∈C(w) is associated with a learned vector embedding c. Based upon these pre-trained KB embeddings, KT-NET is built, as depicted in Fig. 2, with four major components: (i) a BERT encoding layer that computes deep, context-aware representations for questions and passages; (ii) a knowledge integration layer that employs an attention mechanism to select the most relevant KB embeddings, and integrates them with BERT representations; (iii) a self-matching layer that further enables rich interactions among BERT and KB representations; and (iv) an output layer that predicts the final answer. In what follows, we first introduce the four major components in § 2.1, and leave knowledge embedding and retrieval to § 2.2. 
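Before detailing each component, the overall flow can be summarized by the following schematic. It is only a sketch with assumed placeholder functions for the four layers, not the released implementation.

# Schematic forward pass of KT-NET; the four callables stand in for
# the layers described in Section 2.1.
def kt_net_forward(question, passage, kb_embeddings):
    tokens = pack_sequence(question, passage)      # [CLS] Q [SEP] P [SEP]
    h = bert_encode(tokens)                        # context-aware token states
    u = integrate_knowledge(h, kb_embeddings)      # attend over retrieved KB concepts
    o = self_match(u)                              # direct + indirect interactions
    start_probs, end_probs = output_layer(o)       # answer-span boundaries
    return start_probs, end_probs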
2.1 Major Components of KT-NET KT-NET consists of four major modules: BERT encoding, knowledge integration, self-matching, and final output, detailed as follows. BERT Encoding Layer This layer uses BERT encoder to model passages and questions. It takes as input passage P and question Q, and computes for each token a context-aware representation. Specifically, given passage P = {pi}m i=1 and question Q = {qj}n j=1, we first pack them into a single sequence of length m + n + 3, i.e., S = [⟨CLS⟩, Q, ⟨SEP⟩, P, ⟨SEP⟩], where ⟨SEP⟩is the token separating Q and P, and ⟨CLS⟩the token for classification (will not be used in this paper). For each token si in S, we construct its input representation as: h0 i = stok i + spos i + sseg i , where stok i , spos i , and sseg i are the token, position, and segment embeddings for si, respectively. Tokens in Q share a same segment embedding qseg, and tokens in P a same segment embedding pseg. Such input representations are then fed into L successive Transformer encoder blocks, i.e., hℓ i = Transformer(hℓ−1 i ), ℓ= 1, 2, · · · , L, so as to generate deep, context-aware representations for passages and questions. We refer readers to (Devlin et al., 2018; Vaswani et al., 2017) for details. The final hidden states {hL i }m+n+3 i=1 ∈Rd1 are taken as the output of this layer. Knowledge Integration Layer This layer is designed to further integrate knowledge into BERT, and is a core module of our approach. It takes as input the BERT representations {hL i } output from the previous layer, and enriches them with relevant KB embeddings, which makes the representations not only context-aware but also knowledge-aware. Specifically, for each token si, we get its BERT representation hL i ∈Rd1 and retrieve a set of potentially relevant KB concepts C(si), where each concept cj is associated with KB embedding cj ∈ Rd2. (We will describe the KB embedding and retrieval process later in § 2.2.) Then we employ an attention mechanism to adaptively select the most relevant KB concepts. We measure the relevance of concept cj to token si with a bilinear operation, and calculate the attention weight as: αij ∝exp(c⊤ j WhL i ), (1) where W ∈Rd2×d1 is a trainable weight parameter. As these KB concepts are not necessarily relevant to the token, we follow (Yang and Mitchell, 2017) to further introduce a knowledge sentinel ¯c ∈Rd2, and calculate its attention weight as: βi ∝exp(¯c⊤WhL i ). (2) The retrieved KB embeddings {cj} (as well as the sentinel ¯c) are then aligned to si and aggregated accordingly, i.e., ki = X j αijcj + βi¯c, (3) with P j αij+βi = 1.2 Here ki can be regarded as a knowledge state vector that encodes extra KB information w.r.t. the current token. We concatenate ki with the BERT representation hL i and output ui = [hL i , ki] ∈Rd1+d2, which is by nature not only context-aware but also knowledge-aware. 2We set ki = 0 if C(si) = ∅. 2349 Self-Matching Layer This layer takes as input the knowledge-enriched representations {ui}, and employs a self-attention mechanism to further enable interactions among the context components {hL i } and knowledge components {ki}. It is also an important module of our approach. We model both direct and indirect interactions. As for direct interactions, given two tokens si and sj (along with their knowledge-enriched representations ui and uj), we measure their similarity with a trilinear function (Seo et al., 2017): rij = w⊤[ui, uj, ui ⊙uj], and accordingly obtain a similarity matrix R with rij being the ij-th entry. 
Here ⊙denotes elementwise multiplication, and w ∈R3d1+3d2 is a trainable weight parameter. Then, we apply a row-wise softmax operation on R to get the self-attention weight matrix A, and compute for each token si an attended vector vi, i.e., aij = exp(rij) P j exp(rij), vi = X j aijuj, where aij is the ij-th entry of A. vi reflects how each token sj interacts directly with si. Aside from direct interactions, indirect interactions, e.g., the interaction between si and sj via an intermediate token sk, are also useful. To further model such indirect interactions, we conduct a self-multiplication of the original attention matrix A, and compute for each token si another attended vector ¯vi, i.e., ¯A = A2, ¯vi = X j ¯aijuj, where ¯aij is the ij-th entry of ¯A. ¯vi reflects how each token sj interacts indirectly with si, through all possible intermediate tokens. Finally, we build the output for each token by a concatenation oi = [ui, vi, ui −vi, ui ⊙vi, ¯vi, ui −¯vi] ∈R6d1+6d2. Output Layer We follow BERT and simply use a linear output layer, followed by a standard softmax operation, to predict answer boundaries. The probability of each token si to be the start or end position of the answer span is calculated as: p1 i = exp(w⊤ 1 oi) P j exp(w⊤ 1 oj), p2 i = exp(w⊤ 2 oi) P j exp(w⊤ 2 oj), where {oi} are output by the self-matching layer, and w1, w2 ∈R6d1+6d2 are trainable parameters. The training objective is the log-likelihood of the true start and end positions: L = −1 N N X j=1 (log p1 y1 j + log p2 y2 j ), where N is the number of examples in the dataset, and y1 j , y2 j are the true start and end positions of the j-th example, respectively. At inference time, the span (a, b) where a ≤b with maximum p1 ap2 b is chosen as the predicted answer. 2.2 Knowledge Embedding and Retrieval Now we introduce the knowledge embedding and retrieval process. We use two KBs: WordNet and NELL, both stored as (subject, relation, object) triples, where each triple is a fact indicating a specific relation between two entities. WordNet stores lexical relations between word synsets, e.g., (organism, hypernym of, animal). NELL stores beliefs about entities, where the subjects are usually real-world entities and the objects are either entities, e.g., (Coca Cola, headquartered in, Atlanta), or concepts, e.g., (Coca Cola, is a, company). Below we shall sometimes abuse terminologies and refer to synsets, real-world entities, and concepts as “entities”. As we have seen in Fig. 1, both KBs are useful for MRC. KB Embedding In contrast to directly encoding KBs as symbolic (subject, relation, object) facts, we choose to encode them in a continuous vector space. Specifically, given any triple (s, r, o), we would like to learn vector embeddings of subject s, relation r, and object o, so that the validity of the triple can be measured in the vector space based on the embeddings. We adopt the BILINEAR model (Yang et al., 2015) which measures the validity via a bilinear function f(s, r, o) = s⊤diag(r)o. Here, s, r, o ∈Rd2 are the vector embeddings associated with s, r, o, respectively, and diag(r) is a diagonal matrix with the main diagonal given by r. Triples already stored in a KB are supposed to have higher validity. A margin-based ranking loss is then accordingly designed to learn the embeddings (refer to (Yang et al., 2015) for details). After this embedding process, we obtain a vector representation for each entity (as well as relation) of the two KBs. 
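As a concrete reference, the BILINEAR validity score can be written as below, together with one standard form of a margin-based ranking loss; the exact training objective follows Yang et al. (2015), so the loss shown here is only an illustrative sketch.

import numpy as np

def bilinear_score(s, r, o):
    # f(s, r, o) = s^T diag(r) o, with s, r, o in R^{d2}
    return float(np.sum(s * r * o))

def margin_ranking_loss(pos_triple, neg_triple, margin=1.0):
    # One common formulation: corrupted (negative) triples should score
    # lower than observed triples by at least `margin`.
    return max(0.0, margin - bilinear_score(*pos_triple)
                          + bilinear_score(*neg_triple))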
KB Concepts Retrieval In this work, we treat WordNet synsets and NELL concepts as knowl2350 edge to be retrieved from KBs, similar to (Yang and Mitchell, 2017). For WordNet, given a passage or question word, we return its synsets as candidate KB concepts. For NELL, we first recognize named entities from a given passage and question, link the recognized mentions to NELL entities by string matching, and then collect the corresponding NELL concepts as candidates. Words within a same entity name and subwords within a same word will share the same retrieved concepts, e.g., we retrieve the NELL concept “company” for both “Coca” and “Cola”. After this retrieval process, we obtain a set of potentially relevant KB concepts for each token in the input sequence, where each KB concept is associated with a vector embedding. Advantages Previous attempts that leverage extra knowledge for MRC (Bauer et al., 2018; Mihaylov and Frank, 2018) usually follow a retrievethen-encode paradigm, i.e., they first retrieve relevant knowledge from KBs, and only the retrieved knowledge—which is relevant locally to the reading text—will be encoded and integrated for MRC. Our approach, by contrast, first learns embeddings for KB concepts with consideration of the whole KBs (or at least sufficiently large subsets of KBs). The learned embeddings are then retrieved and integrated for MRC, which are thus relevant not only locally to the reading text but also globally about the whole KBs. Such knowledge is more informative and potentially more useful for MRC. Moreover, our approach offers a highly convenient way to simultaneously integrate knowledge from multiple KBs. For instance, suppose we retrieve for token si a set of candidate KB concepts C1(si) from WordNet, and C2(si) from NELL. Then, we can compute a knowledge state vector k1 i based on C1(si), and k2 i based on C2(si), which are further combined with the BERT hidden state hL i to generate ui = [hL i , k1 i , k2 i ]. As such, ui naturally encodes knowledge from both KBs (see the knowledge integration layer for technical details). 3 Experiments 3.1 Datasets In this paper we empirically evaluate our approach on two benchmarks: ReCoRD and SQuAD1.1. ReCoRD—acronym for the Reading Comprehension with Commonsense Reasoning Dataset— is a large-scale MRC dataset requiring commonsense reasoning (Zhang et al., 2018). It consists Dataset Train Dev Test ReCoRD 100,730 10,000 10,000 SQuAD1.1 87,599 10,570 9,533 Table 1: The number of training, development, and test examples of ReCoRD and SQuAD1.1. of passage-question-answer tuples, collected from CNN and Daily Mail news articles. In each tuple, the passage is formed by the first few paragraphs of a news article, with named entities recognized and marked. The question is a sentence from the rest of the article, with a missing entity specified as the golden answer. The goal is to find the golden answer among the entities marked in the passage, which can be deemed as an extractive MRC task. This data collection process by design generates questions that require external knowledge and reasoning. It also filters out questions that can be answered simply by pattern matching, posing further challenges to current MRC systems. We take it as the major testbed for evaluating our approach. SQuAD1.1 (Rajpurkar et al., 2016) is a wellknown extractive MRC dataset that consists of questions created by crowdworkers for Wikipedia articles. The golden answer to each question is a span from the corresponding passage. 
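Since answers in both datasets are contiguous spans, inference reduces to the span search described at the end of § 2.1. The sketch below illustrates it; the 30-token cap reflects the maximum answer length used in our setup (§ 3.2), and the function name is ours.

def decode_span(start_probs, end_probs, max_answer_len=30):
    # Pick (a, b) with a <= b and span length at most max_answer_len,
    # maximizing start_probs[a] * end_probs[b].
    best, best_score = (0, 0), -1.0
    for a, p_start in enumerate(start_probs):
        for b in range(a, min(a + max_answer_len, len(end_probs))):
            score = p_start * end_probs[b]
            if score > best_score:
                best, best_score = (a, b), score
    return best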
In this paper, we focus more on answerable questions than unanswerable ones. Hence, we choose SQuAD1.1 rather than SQuAD2.0 (Rajpurkar et al., 2018). Table 1 provides the statistics of ReCoRD and SQuAD1.1. On both datasets, the training and development (dev) sets are publicly available, but the test set is hidden. One has to submit the code to retrieve the final test score. As frequent submissions to probe the unseen test set are not encouraged, we only submit our best single model for testing,3 and conduct further analysis on the dev set. Both datasets use Exact Match (EM) and (macro-averaged) F1 as the evaluation metrics (Zhang et al., 2018). 3.2 Experimental Setups Data Preprocessing We first prepare pre-trained KB embeddings. We use the resources provided by Yang and Mitchell (2017), where the WordNet embeddings were pre-trained on a subset consisting of 151,442 triples with 40,943 synsets and 18 relations, and the NELL embeddings pre-trained on a subset containing 180,107 entities and 258 3In this paper, we restrict ourselves to improvements involving a single model, and hence do not consider ensembles. 2351 concepts. Both groups of embeddings are 100-D. Refer to (Yang and Mitchell, 2017) for details. Then we retrieve knowledge from the two KBs. For WordNet, we employ the BasicTokenizer built in BERT to tokenize text, and look up synsets for each word using NLTK (Bird and Loper, 2004). Synsets within the 40,943 subset are returned as candidate KB concepts for the word. For NELL, we link entity mentions to the whole KB, and return associated concepts within the 258 subset as candidate KB concepts. Entity mentions are given as answer candidates on ReCoRD, and recognized by Stanford CoreNLP (Manning et al., 2014) on SQuAD1.1. Finally, we follow Devlin et al. (2018) and use the FullTokenizer built in BERT to segment words into wordpieces. The maximum question length is set to 64. Questions longer than that are truncated. The maximum input length (|S|) is set to 384. Input sequences longer than that are segmented into chunks with a stride of 128. The maximum answer length at inference time is set to 30. Comparison Setting We evaluate our approach in three settings: KT-NETWordNet, KT-NETNELL, and KT-NETBOTH, to incorporate knowledge from WordNet, NELL, and both of the two KBs, respectively. We take BERT as a direct baseline, in which only the BERT encoding layer and output layer are used, and no knowledge will be incorporated. Our BERT follows exactly the same design as the original paper (Devlin et al., 2018). Besides BERT, we further take top-ranked systems on each dataset as additional baselines (will be detailed in § 3.3). Training Details For all three settings of KTNET (as well as BERT), we initialize parameters of the BERT encoding layer with pre-trained models officially released by Google4. These models were pre-trained on the concatenation of BooksCorpus (800M words) and Wikipedia (2,500M words), using the tasks of masked language model and next sentence prediction (Devlin et al., 2018). We empirically find that the cased, large model—which is case sensitive and contains 24 Transformer encoding blocks, each with 16 self-attention heads and 1024 hidden units— performs the best on both datasets. Throughout our experiments, we use this setting unless specified otherwise. Other trainable parameters are randomly initialized. 4https://github.com/google-research/bert Model Dev Test EM F1 EM F1 Leaderboard (Mar. 
4th, 2019) Human 91.28 91.64 91.31 91.69 #1 DCReader+BERT – – 70.49 71.98 #2 BERTBASE – – 55.99 57.99 #3 DocQA w/ ELMo 44.13 45.39 45.44 46.65 #4 SAN 38.14 39.09 39.77 40.72 #5 DocQA 36.59 37.89 38.52 39.76 Ours BERT 70.22 72.16 – – KT-NETWordNet 70.56 72.75 – – KT-NETNELL 70.54 72.52 – – KT-NETBOTH 71.60 73.61 73.01 74.76 Table 2: Results on ReCoRD. The top 5 systems are all single models and chosen for comparison. Model Dev Test EM F1 EM F1 Leaderboard (Mar. 4th, 2019) Human 80.3 90.5 82.30 91.22 #1 BERT+TriviaQA 84.2 91.1 85.08 91.83 #2 WD – – 84.40 90.56 #3 nlnet – – 83.47 90.13 #4 MARS – – 83.19 89.55 #5 QANet – – 82.47 89.31 Ours BERT 84.41 91.24 – – KT-NETWordNet 85.15 91.70 85.94 92.43 KT-NETNELL 85.02 91.69 – – KT-NETBOTH 84.96 91.64 – – Table 3: Results on SQuAD1.1. The top 5 single models are chosen for comparison. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 3e-5 and a batch size of 24. The number of training epochs is chosen from {2,3,4}, according to the best EM+F1 score on the dev set of each dataset. During training, the pre-trained BERT parameters will be fine-tuned with other trainable parameters, and the KB embeddings will be kept fixed, which is empirically observed to offer the best performance. 3.3 Results On ReCoRD and SQuAD1.1, we compare our approach to BERT and the top 5 (single) models on the leaderboard (exclusive of ours). The results are given in Table 2 and Table 3, respectively, where the scores of the non-BERT baselines are taken directly from the leaderboard and/or literature. On ReCoRD5 (Table 2): (i) DCReader+BERT is the former top leaderboard system(unpublished) prior to our submission; (ii) BERTBASE is BERT with the base setting (12 Transformer blocks, each 5https://sheng-z.github.io/ReCoRD-explorer/ 2352 Concepts from NELL: US 1. geopoliticalorganization: 0.874 2. geopoliticallocation: 0.122 3. organization: 0.003 UN 1. nongovorganization: 0.986 2. sentinel: 0.012 3. terroristorganization: 0.001 Concepts from WordNet: ban 1. forbidding_NN_1: 0.861 2. proscription_NN_1: 0.135 3. ban_VB_2: 0.002 sanctions 1. sanction_VB_1: 0.336 2. sanction_NN_3: 0.310 3. sanction_NN_4: 0.282 (a) KT-NET (a) (b) BERT Figure 3: Case study. Heat maps present similarities between question (row) and passage (column) words. Line charts show probabilities of answer boundaries. In KT-NET, top 3 most relevant KB concepts are further given. with 12 self-attention heads and 768 hidden units); (iii) DocQA (Liu et al., 2018) and SAN (Clark and Gardner, 2018) are two previous state-of-the-art MRC models; (iv) the pre-trained LM ELMo (Peters et al., 2018a) is further used in DocQA. All these models, except for DCReader+BERT, were re-implemented by the creators of the dataset and provided as official baselines (Zhang et al., 2018). On SQuAD6 (Table 3): (i) BERT+TriviaQA is the former best model officially submitted by Google. It is an uncased, large model, and further uses data augmentation with TriviaQA (Joshi et al., 2017); (ii) WD, nlnet, and MARS are three competitive models that have not been published; (iii) QANet is a well performing MRC model proposed by Yu et al. (2018), and later re-implemented and submitted by Google Brain & CMU. Results on dev sets show that (i) KT-NET consistently outperforms BERT (which itself already surpasses all the other baselines), irrespective of which KB is used, and on both datasets. Our best KT-NET model offers a 1.38/1.45 improvement in EM/F1 over BERT on ReCoRD, and a 0.74/0.46 improvement in EM/F1 on SQuAD1.1. 
(ii) Both KBs are capable of improving BERT for MRC, but the best setting varies across datasets. Integrating both KBs performs best on ReCoRD, while using WordNet alone is a better choice on SQuAD1.1. Results on test sets further demonstrate the superiority of our approach. It significantly outperforms the former top leaderboard system by +2.52 EM/+2.78 F1 on ReCoRD. And on SQuAD1.1, 6https://rajpurkar.github.io/SQuAD-explorer/ although little room for improvement, it still gets a meaningful gain of +0.86 EM/+0.60 F1 over the former best single model. 4 Case Study This section provides a case study, using the motivating example described in Fig. 1, to vividly show the effectiveness of KT-NET, and make a direct comparison with BERT. For both methods, we use the optimal configurations that offer their respective best performance on ReCoRD (where the example comes from). Relevant Knowledge Selection We first explore how KT-NET can adaptively select the most relevant knowledge w.r.t. the reading text. Recall that given a token si, the relevance of a retrieved KB concept cj is measured by the attention weight αij (Eq. (1)), according to which we can pick the most relevant KB concepts for this token. Fig. 3(a) (left) presents 4 tokens from the question/passage, each associated with top 3 most relevant concepts from NELL or WordNet. As we can see, these attention distributions are quite meaningful, with “US” and “UN” attending mainly to the NELL concepts of “geopoliticalorganization” and “nongovorganization”, respectively, “ban” mainly to the WordNet synset “forbidding NN 1”, and “sanction” almost uniformly to the three highly relevant synsets. Question/Passage Representations We further examine how such knowledge will affect the final representations learned for the question/passage. We consider all sentences listed in Fig. 1, and con2353 tent words (nouns, verbs, adjectives, and adverbs) therein. For each word si, we take its final representation oi, obtained right before the output layer. Then we calculate the cosine similarity cos(oi, oj) between each question word si and passage word sj. The resultant similarity matrices are visualized in Fig. 3(a) and Fig. 3(b) (heat maps), obtained by KT-NET and BERT, respectively.7 For BERT (Fig. 3(b)), given any passage word, all question words tend to have similar similarities to the given word, e.g., all the words in the question have a low degree of similarity to the passage word “US”, while a relatively high degree of similarity to “repealed”. Such phenomenon indicates that after fine-tuning in the MRC task, BERT tends to learn similar representations for question words, all of which approximately express the meaning of the whole question and are hard to distinguish. For KT-NET (Fig. 3(a)), by contrast, different question words can exhibit diverse similarities to a passage word, and these similarities may perfectly reflect their relationships encoded in KBs. For example, we can observe relatively high similarities between: (i) “administration” and “government” which share a same synset, (ii) “ban” and “sanctions” which have a common hypernym, and (iii) “sponsor” and “support” where a synset of the former has the relation “derivationally related form” with the latter, all in WordNet. Such phenomenon indicates that after integrating knowledge, KT-NET can learn more accurate representations which enable better question-passage matching. Final Answer Prediction Fig. 3(a) and Fig. 
3(b) (line charts) list the probability of each word to be start/end of the answer, predicted by KT-NET and BERT, respectively. BERT mistakenly predicts the answer as “UN Security Council”, but our method successfully gets the correct answer “US”. We observed similar phenomena on SQuAD1.1 and report the results in Appendix B. 5 Related Work Machine Reading Comprehension In the last few years, a number of datasets have been created for MRC, e.g., CNN/DM (Hermann et al., 2015), SQuAD (Rajpurkar et al., 2016, 2018), SearchQA (Dunn et al., 2017), TriviaQA (Joshi et al., 2017), and MS-MARCO (Nguyen et al., 2016). These 7During visualization, we use a row-wise softmax operation to normalize similarity scores over all passage tokens. datasets have led to advances like Match-LSTM (Wang and Jiang, 2017), BiDAF (Seo et al., 2017), AoA Reader (Cui et al., 2017), DCN (Xiong et al., 2017), R-Net (Wang et al., 2017), and QANet (Yu et al., 2018). These end-to-end neural models have similar architectures, starting off with an encoding layer to encode every question/passage word as a vector, passing through various attention-based interaction layers and finally a prediction layer. More recently, LMs such as ELMo (Peters et al., 2018b), GPT (Radford et al., 2018), and BERT (Devlin et al., 2018) have been devised. They pre-train deep LMs on large-scale unlabeled corpora to obtain contextual representations of text. When used in downstream tasks including MRC, the pre-trained contextual representations greatly improve the performance in either a fine-tuning or feature-based way. Built upon pre-trained LMs, our work further explores the potential of incorporating structured knowledge from KBs, combining the strengths of both text and knowledge representations. Incorporating KBs Several MRC datasets that require external knowledge have been proposed, such as ReCoRD (Zhang et al., 2018), ARC (Clark et al., 2018), MCScript (Ostermann et al., 2018), OpenBookQA (Mihaylov et al., 2018) and CommonsenseQA (Talmor et al., 2018). ReCoRD can be viewed as an extractive MRC dataset, while the later four are multi-choice MRC datasets, with relatively smaller size than ReCoRD. In this paper, we focus on the extractive MRC task. Hence, we choose ReCoRD and SQuAD in the experiments. Some previous work attempts to leverage structured knowledge from KBs to deal with the tasks of MRC and QA. Weissenborn et al. (2017), Bauer et al. (2018), Mihaylov and Frank (2018), Pan et al. (2019), Chen et al. (2018), Wang et al. (2018) follow a retrieve-then-encode paradigm, i.e., they first retrieve relevant knowledge from KBs, and only the retrieved knowledge relevant locally to the reading text will be encoded and integrated. By contrast, we leverage pre-trained KB embeddings which encode whole KBs. Then we use attention mechanisms to select and integrate knowledge that is relevant locally to the reading text. Zhong et al. (2018) try to leverage pre-trained KB embeddings to solve the multi-choice MRC task. However, the knowledge and text modules are not integrated,but used independently to predict the answer. And the model cannot be applied to extractive MRC. 2354 6 Conclusion This paper introduces KT-NET for MRC, which enhances BERT with structured knowledge from KBs and combines the merits of the both. We use two KBs: WordNet and NELL. 
We learn embeddings for the two KBs, select desired embeddings from them, and fuse the selected embeddings with BERT hidden states, so as to enable context- and knowledge-aware predictions.Our model achieves significant improvements over previous methods, becoming the best single model on ReCoRD and SQuAD1.1 benchmarks. This work demonstrates the feasibility of further enhancing advanced LMs with knowledge from KBs, which indicates a potential direction for future research. Acknowledgments This work is supported by the Natural Science Foundation of China (No.61533018, 61876009 and 61876223) and Baidu-Peking University Joint Project. We would like to thank the ReCoRD and SQuAD teams for evaluating our results on the anonymous test sets. And we are grateful to Tom M. Mitchell and Bishan Yang for sharing with us the valuable KB resources. We would also like to thank the anonymous reviewers for their insightful suggestions. References Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4220–4230. Association for Computational Linguistics. Steven Bird and Edward Loper. 2004. Nltk: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, page 31. Association for Computational Linguistics. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. 2010. Toward an architecture for never-ending language learning. In Proceedings of the TwentyFourth AAAI Conference on Artificial Intelligence, pages 1306–1313. AAAI Press. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358–2367. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2406–2417. Association for Computational Linguistics. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845–855. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. arXiv e-prints, arXiv:1803.05457. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 593–602. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv e-prints, arXiv:1810.04805. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv e-prints, arXiv:1704.05179. Yoav Goldberg. 2019. 
Assessing bert’s syntactic abilities. arXiv e-prints, arXiv:1901.05287. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693– 1701. Curran Associates, Inc. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for machine reading comprehension. In Proceedings of the 2355 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1694–1704. Association for Computational Linguistics. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Association for Computational Linguistics. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391. Association for Computational Linguistics. Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821–832. Association for Computational Linguistics. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches, pages 96–105. Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. Semeval2018 task 11: Machine comprehension using commonsense knowledge. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 747–757. Association for Computational Linguistics. Xiaoman Pan, Kai Sun, Dian Yu, Heng Ji, and Dong Yu. 2019. Improving question answering with external knowledge. arXiv e-prints, arXiv:1902.00993. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509. 
Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, Technical report, OpenAI. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv e-prints, arXiv:1811.00937. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Liang Wang, Meng Sun, Wei Zhao, Kewei Shen, and Jingming Liu. 2018. Yuanfudao at semeval-2018 task 11: Three-way attention and relational knowledge for commonsense machine comprehension. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 758–762. Association for Computational Linguistics. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In International Conference on Learning Representations (ICLR). Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189–198. Association for Computational Linguistics. Dirk Weissenborn, Tom´aˇs Koˇcisk`y, and Chris Dyer. 2017. Dynamic integration of background knowledge in neural NLU systems. arXiv e-prints, arXiv:1706.02596. 2356 Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In International Conference on Learning Representations (ICLR). Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436–1446. Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations (ICLR). Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In International Conference on Learning Representations (ICLR). Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv e-prints, arXiv:1810.12885. 
Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018. Improving question answering by commonsense-based pre-training. arXiv e-prints, arXiv:1809.03568. A Motivating Example from SQuAD1.1 We provide a motivating example from SQuAD1.1 to show the importance and necessity of integrating background knowledge. We restrict ourselves to knowledge from WordNet, which offers the best performance on this dataset according to our experimental results (Table 3). Fig. 4 presents the example. The passage states that the congress aimed to formalize a unified front in trade and negotiations with various Indians, but the plan was never ratified by the colonial legislatures nor approved of by the crown. And the question asks whether the plan was formalized. BERT fails on this case by spuriously matching the two “formalize” appearing in the passage and question. But after introducing the word knowledge “ratified is a hypernym of formalized” and “approved has a common hypernym with formalized”, we can successfully predict that the correct answer is “never ratified by the colonial legislatures nor approved of by the crown”. Passage: [...] The goal of the congress was to formalize a unified front in trade and negotiations with various Indians, since allegiance of the various tribes and nations was seen to be pivotal in the success in the war that was unfolding. The plan that the delegates agreed to was never ratified by the colonial legislatures nor approved of by the crown. [...] Question: Was the plan formalized? Original BERT prediction: formalize a unified front in trade and negotiations with various Indians Prediction with background knowledge: never ratified by the colonial legislatures nor approved of by the crown Background knowledge: (ratified, hypernym-of, formalized) (approved, common-hypernym-with, formalized) Figure 4: An example from SQuAD1.1. The vanilla BERT model fails to predict the correct answer. But it succeeds after integrating background knowledge collected from WordNet. B Case Study on SQuAD1.1 We further provide a case study, using the above example, to vividly show the effectiveness of our method KT-NET, and make a direct comparison with BERT. We use the same analytical strategy as described in § 4. For both KT-NET and BERT, we use the optimal configurations that offer their respective best performance on SQuAD1.1 (where the example comes from). Relevant Knowledge Selection We first explore how KT-NET can adaptively select the most relevant knowledge w.r.t. the reading text. Fig.5(a) (left) presents 3 words from the question/passage, each associated with top 3 most relevant synsets from WordNet.8 Here the relevance of synset cj to word si is measured by the attention weight αij (Eq. (1)).9 As we can see, these attention distributions are quite meaningful, with “ratified” attending mainly to WordNet synset “sign VB 2”, “formalized” mainly to synset “formalize VB 1”, and “approved” mainly to synsets“approve VB 2” and “sanction VB 1”. Question/Passage Representations We further examine how such knowledge will affect the final representations learned for the question/passage. We consider all sentences listed in Fig. 4, and content words (nouns, verbs, adjectives, and adverbs) therein. For each word si, we take its final repre8We retrieve a single synset “sign VB 2” for “ratified”. 9If word si consists of multiple subwords, we average the relevance of cj over these subwords. 2357 formalized 1. formalize_VB_1: 0.948 2. validate_VB_1: 0.046 3. 
formalized_JJ_1: 0.006 ratified 1. sign_VB_2: 0.991 2. sentinel: 0.009 approved 1. approve_VB_2: 0.791 2. sanction_VB_1: 0.206 3. sentinel: 0.003 (a) KT-NET (b) BERT Figure 5: Case study. Heat maps present similarities between question (row) and passage (column) words. Line charts show probabilities of answer boundaries. In KT-NET, top 3 most relevant KB concepts are further given. sentation oi, obtained right before the output layer. Then we calculate the cosine similarity cos(oi, oj) between each question word si and passage word sj. The resultant similarity matrices are visualized in Fig.5(a) and Fig.5(b) (heat maps), obtained by KT-NET and BERT, respectively.10 For BERT (Fig.5(b)), we observe very similar patterns as in the ReCoRD example (§ 4). Given any passage word, all question words tend to have similar similarities to the given word, e.g., all the words in the question have a low degree of similarity to the passage word “never”, while a relatively high degree of similarity to “various”. Such phenomenon indicates, again, that after fine-tuning in the MRC task, BERT tends to learn similar representations for question words, all of which approximately express the meaning of the whole question and are hard to distinguish. For KT-NET (Fig.5(a)), although the similarities between question and passage words are generally higher, these similarities may still perfectly reflect their relationships encoded in KBs. For example, we can observe relatively high similarities between: (i) “formalized” and “ratified” where the latter is a hypernym of the former; (ii) “formalized” and “approved” which share a common hypernym in WordNet. Such phenomenon indicates, again, that after integrating knowledge, KT-NET can learn more accurate representations which enable better question-passage matching. Final Answer Prediction With the learned representations, predicting final answers is a natural next step. Fig.5(a) and Fig.5(b) (line charts) list 10During visualization, we take the averaged cosine similarity if word si or word sj has subwords. And we use a rowwise softmax operation to normalize similarity scores over all passage tokens. the probability of each word to be the start/end of the answer, predicted by KT-NET and BERT, respectively. As we can see, BERT mistakenly predicts the answer as “formalize a unified front in trade and negotiations with various Indians”, but our method successfully gets the correct answer “never ratified by the colonial legislatures nor approved of by the crown”. The phenomena observed here are quite similar to those observed in the ReCoRD example, both demonstrating the effectiveness of our method and its superiority over BERT.
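As a concrete reference for the analysis above, the similarity heat maps of Fig. 5 boil down to a cosine-similarity matrix between final-layer question and passage word vectors, normalized row-wise over passage tokens (footnote 10). The following is a minimal Python sketch under the assumption that the final representations are already available as arrays; the names and shapes are illustrative and this is not the released KT-NET code.

# Minimal sketch: cosine-similarity heat map between question and passage
# word representations, with a row-wise softmax over passage tokens.
import numpy as np

def similarity_heatmap(q_vecs: np.ndarray, p_vecs: np.ndarray) -> np.ndarray:
    # Cosine similarity between every question word and every passage word.
    q = q_vecs / np.linalg.norm(q_vecs, axis=1, keepdims=True)
    p = p_vecs / np.linalg.norm(p_vecs, axis=1, keepdims=True)
    sims = q @ p.T                                   # shape: (n_question, n_passage)
    # Row-wise softmax over passage tokens, as in footnote 10.
    sims = sims - sims.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(sims)
    return probs / probs.sum(axis=1, keepdims=True)

# Random vectors stand in for BERT / KT-NET final-layer representations.
rng = np.random.default_rng(0)
heat = similarity_heatmap(rng.normal(size=(4, 768)), rng.normal(size=(20, 768)))
print(heat.shape, heat.sum(axis=1))                  # each row sums to 1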
2019
226
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2358–2368 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2358 XQA: A Cross-lingual Open-domain Question Answering Dataset Jiahua Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun∗ Department of Computer Science and Technology, Institute for Artificial Intelligence, State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China [email protected],[email protected] {liuzy,sms}@tsinghua.edu.cn Abstract Open-domain question answering (OpenQA) aims to answer questions through text retrieval and reading comprehension. Recently, lots of neural network-based models have been proposed and achieved promising results in OpenQA. However, the success of these models relies on a massive volume of training data (usually in English), which is not available in many other languages, especially for those low-resource languages. Therefore, it is essential to investigate cross-lingual OpenQA. In this paper, we construct a novel dataset XQA for cross-lingual OpenQA research. It consists of a training set in English as well as development and test sets in eight other languages. Besides, we provide several baseline systems for cross-lingual OpenQA, including two machine translation-based methods and one zero-shot cross-lingual method (multilingual BERT). Experimental results show that the multilingual BERT model achieves the best results in almost all target languages, while the performance of cross-lingual OpenQA is still much lower than that of English. Our analysis indicates that the performance of cross-lingual OpenQA is related to not only how similar the target language and English are, but also how difficult the question set of the target language is. The XQA dataset is publicly available at http://github.com/thunlp/XQA. 1 Introduction In recent years, open-domain question answering (OpenQA), which aims to answer open-domain questions with a large-scale text corpus, has attracted lots of attention from natural language processing researchers. Chen et al. (2017) proposed DrQA model, which used a text retriever to obtain relevant documents from Wikipedia, and further applied a trained reading comprehension model ∗Corresponding author: Maosong Sun to extract the answer from the retrieved documents. Moreover, researchers have introduced more sophisticated models, which either aggregate all informative evidence (Lin et al., 2018; Wang et al., 2018b) or filter out those noisy retrieved text (Clark and Gardner, 2018; Choi et al., 2017; Wang et al., 2018a) to better predict the answers for open-domain questions. Benefiting from the power of neural networks, these models have achieved remarkable results in OpenQA. However, these neural-based models must be trained with a huge volume of labeled data. Collecting and labeling large-size training data for each language is often intractable and unrealistic, especially for those low-resource languages. In this case, it is impossible to directly apply existing OpenQA models to many different languages. To address this problem, an alternative approach is to build a cross-lingual OpenQA system. It is trained on data in one high-resource source language such as English, and predicts answers for open-domain questions in other target languages. In fact, cross-lingual OpenQA can be viewed as a particular task of cross-lingual language understanding (XLU). 
Recently, XLU has been applied to many natural language processing tasks such as cross-lingual document classification (Schwenk and Li, 2018), cross-lingual natural language inference (Conneau et al., 2018b), and machine translation (Lample et al., 2018). Most cross-lingual models focus on word or sentence level understanding, while the interaction between questions and documents as well as the overall understanding of the documents are essential to OpenQA. To the best of our knowledge, there is still no dataset for cross-lingual OpenQA. In this paper, we introduce a cross-lingual OpenQA dataset called XQA. It consists of a training set in English, and development and test sets in English, French, German, Portuguese, Polish, 2359 Language Question Answer English Do you know that the <Query> is the largest stingray in the Atlantic Ocean, at up to across and weighing? Roughtail stingray Chinese 你知道<Query>可以在美国无限期居住和工作,并持有称 为“绿卡”的证件? 美国永久居民 French Le saviez-vous le <Query> est une forme de danse classique indienne originaire du sud de l’Inde? Bharata natyam German Schon gewusst die ersten <Query> entstanden in den 1960er Jahren durch Kreuzungsversuche und zeichneten sich durch einen intensiven Duft aus? Englische Rosen Polish Czy wiesz <Query> w Wojewódzkim Parku Kultury i Wypoczynku w Chorzowie i Katowicach to najdłu˙zsza nizinna kolej linowa w Europie? Kolej linowa „Elka” Portuguese Sabia que no curso da história, <Query> foi destruída duas vezes, sitiada 23 vezes, atacada 52 vezes, e capturada e recapturada 44 vezes? Jerusalém Russian термин <Query> был введен в 1981 для обозначения усиления слабого сигнала при наложении шума Стохастический резонанс Tamil Ukrainian 22 жовтня 2006 року на гран-прi Бразилiї семиразовий чемпiон свiту з автоперегонiв «Формула-1» <Query> закiнчив кар’єру гонщика. Гран-прi Бразилiї 2006 стало 250-им гран-прi в кар’єрi гонщика за 16 рокiв виступiв. Мiхаель Шумахер Table 1: Some examples in various languages from the XQA corpus. Chinese, Russian, Ukrainian, and Tamil. The training set contains 56, 279 English questionanswer pairs along with relevant documents. The development and test sets contain a total amount of 17, 358 and 16, 973 question-answer pairs respectively. All questions are naturally produced by native speakers, and potentially reflect cultural differences in different languages. Moreover, we build several baseline systems that use the information of multilingual data from publicly available corpora for cross-lingual OpenQA, including two translation-based methods that translate training data and test data respectively and one zero-shot cross-lingual method (multilingual BERT (Devlin et al., 2019)). We evaluate the performance of the proposed baselines in terms of text retrieval and reading comprehension for different target languages on the XQA dataset. The experimental results demonstrate that there is a gap between the performance in English and that in cross-lingual setting. The multilingual BERT model achieves the best performance in almost all target languages, while translation-based methods suffer from the problem of translating name entities. We show that the performance on the XQA dataset depends on not only how similar the target language and English are, but also how difficult the question set of the target language is. Based on the results, we further discuss potential improvement for cross-lingual OpenQA systems. 
We will release the dataset and baseline systems online with the hope that this could contribute to the research of cross-lingual OpenQA and overall cross-lingual language understanding. 2 Related Work 2.1 Open-domain Question Answering OpenQA, first proposed by Green et al. (1986), aims to answer an open-domain question by utilizing external resources. In the past years, most work in this area has focused on using documents (Voorhees et al., 1999), online webpages (Kwok et al., 2001), and structured knowledge graphs (Bordes et al., 2015). Recently, with the advancement of reading comprehension technique (Chen 2360 et al., 2016; Dhingra et al., 2017; Cui et al., 2017), Chen et al. (2017) utilized both the information retrieval and reading comprehension techniques to answer open-domain questions. However, it usually suffers from the noise problem since the data is constructed under the distant supervision assumption. Hence researchers have made various attempts to alleviate the noise problem in OpenQA. Wang et al. (2018a) and Choi et al. (2017) performed paragraph selection before extracting answer of the question. Min et al. (2018) proposed to select a minimal set of sentences with sufficient information to answer the questions, while Lin et al. (2018) and Wang et al. (2018b) took all informative paragraphs into consideration by aggregating evidence in multiple paragraphs. Moreover, Clark and Gardner (2018) applied a shared-normalization learning objective on sampling paragraphs. All the models mentioned above were only verified in a single language (usually in English) with vast volumes of labeled data, and cannot be easily extended to the cross-lingual scenario. 2.2 Cross-lingual Language Understanding Recent years, plenty of work has focused on multilingual word representation learning, including learning from parallel corpus (Gouws et al., 2015; Luong et al., 2015), with a bilingual dictionary (Zhang et al., 2016; Artetxe et al., 2018), and even in a fully unsupervised manner (Conneau et al., 2018a). These multilingual word representation models could be easily extended to multilingual sentence representation by averaging the representations of all words (Klementiev et al., 2012). Nevertheless, this method does not take into account the structure information of sentences. To address this issue, much effort has been devoted to using the context vector of NMT system as multilingual sentence representation (Schwenk and Douze, 2017; Espana-Bonet et al., 2017). Recently, Artetxe and Schwenk (2018) proposed to utilize a single encoder to learn joint multilingual sentence representations for 93 languages. Besides, Devlin et al. (2019) also released a multilingual version of BERT which encoded over 100 languages with a unified encoder. These models have shown their effectiveness in several cross-lingual NLP tasks such as document classification (Klementiev et al., 2012), textual similarity (Cer et al., 2017), natural language inference (Conneau et al., 2018b), and dialog system (Schuster et al., 2019). However, there is still no existing benchmark for cross-lingual OpenQA. In addition, another line of research attempts to answer questions in one language using documents in other languages (Magnini et al., 2004; Vallin et al., 2005; Magnini et al., 2006). Different from their setting, we emphasize on building question answering systems for other languages using labeled data from a rich source language such as English, while the documents are in the same language as the questions. 
3 Cross-lingual Open-domain Question Answering Existing OpenQA models usually first retrieve documents related to the question from the largescale text corpus using information retrieval module, and then predict the answer from these retrieved documents through reading comprehension module. Formally, given a question Q, the OpenQA system first retrieves m documents (paragraphs) P = {p1, p2, · · · , pm} corresponding to the question Q through information retrieval system, and then models the probability distribution of the answer given the question and the documents Pr(A|Q, P). In cross-lingual OpenQA task, we are given a source language Ds = {(Qs i, As i, P s i )}ns i=1 with ns labeled examples, and a target language Dt = {(Qt i, P t i )}nt i=1 with nt unlabeled examples. The cross-lingual OpenQA system aims to learn language independent features, and then build an answer predictor that is able to model the answer prediction probability Prt(At|Qt i, P t i ) for target language under the supervision from source language. In the following part of this section, we will introduce our baseline systems for cross-lingual OpenQA, including two translation-based methods and one zero-shot cross-lingual method. 3.1 Translation-Based Methods The most straightforward solution for crosslingual OpenQA is to combine the machine translation system and the monolingual OpenQA system. In this paper, we consider two ways to use the machine translation system: first, TranslateTrain which translates the training dataset from the source language into target languages, and then trains standard OpenQA system on the trans2361 Language English Chinese French German Polish Portuguese Russian Tamil Ukrainian Avg. question len 18.82 36.83 20.09 14.61 14.49 17.66 14.21 13.29 16.73 Avg. document len 735.91 1159.28 913.72 450.65 256.87 482.74 503.28 200.45 584.93 Avg. paragraph num 10.54 8.66 25.95 8.85 5.34 8.42 10.36 13.78 25.09 Table 2: Average length of questions and documents (number of characters for Chinese, and number of words for other languages) and average number of paragraphs in various languages. Language Train Dev Test English 56,279 2,926 2,924 Chinese 2,532 2,535 French 1,946 1,749 German 3,895 3,804 Polish 924 922 Portuguese 359 348 Russian 3,590 3,490 Tamil 597 586 Ukrainian 589 615 Table 3: Statistics of the XQA dataset. lated data; second, Translate-Test in which an OpenQA system is built with the training data in the source language, and questions and retrieved articles are translated from target languages into the source language. For the OpenQA model, we select two state-ofthe-art models, including: Document-QA model, proposed by (Clark and Gardner, 2018), is a multi-layer neural network which consists of a shared bi-directional GRU layer, a bi-directional attention layer, and a selfattention layer to obtain the question and paragraph representations. To produce well-calibrated answer scores on each paragraph, Document-QA samples multiple paragraphs and applies a sharednormalization learning objective to them. BERT model (short for Bidirectional Encoder Representations from Transformers), proposed by (Devlin et al., 2019), aims to pre-train deep bidirectional representations by jointly conditioning on the context information in all layers. We use BERT to encode questions and paragraphs, and also adopt the shared-normalization learning objective on top to generate well-calibrated answer scores for it. These two translation-based methods are simple and effective, but still have some drawbacks. 
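All of the readers above (Document-QA, the BERT reader, and the multilingual BERT model of Section 3.2) rely on the shared-normalization objective of Clark and Gardner (2018). Before turning to the drawbacks discussed next, the core of that objective can be sketched as follows: the start-position logits of every sampled paragraph for one question are pooled into a single softmax, so that answer scores are calibrated across paragraphs rather than within each paragraph separately. The Python/PyTorch listing below is a minimal illustration with hypothetical tensor names and shapes (start-position term only), not the authors' implementation.

# Minimal sketch of shared normalization over sampled paragraphs.
import torch

def shared_norm_loss(start_logits, gold_masks):
    # Pool the start logits of all paragraphs of one question so that the
    # softmax partition function is shared across paragraphs.
    all_logits = torch.cat(start_logits)
    all_gold = torch.cat(gold_masks).bool()
    log_z = torch.logsumexp(all_logits, dim=0)                  # shared partition function
    gold_score = torch.logsumexp(all_logits[all_gold], dim=0)   # marginalize over gold mentions
    return log_z - gold_score                                   # negative log-likelihood

# Two sampled paragraphs of 400 tokens; the gold answer starts at token 17
# of the first paragraph (hypothetical data).
logits = [torch.randn(400), torch.randn(400)]
gold = [torch.zeros(400), torch.zeros(400)]
gold[0][17] = 1.0
loss = shared_norm_loss(logits, gold)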
Both translate-train and translate-test methods rely heavily on the quality of the machine translation system. However, the quality of the machine translation system varies in different language pairs, depending on the size of parallel data and the similarity of the language pair. 3.2 Zero-shot Cross-lingual Method Zero-shot cross-lingual method uses a unified model for both source and target languages, which is trained with labeled data in the source language and then applied directly to the target language. In this paper, we select the widely-used multilingual BERT model since it has already been proved successful on reading comprehension benchmarks such as SQuAD (Devlin et al., 2019). Multilingual BERT is a multilingual version of BERT, which is trained with the Wikipedia dumps of the top 100 languages in Wikipedia. Similar to the monolingual OpenQA model, we also fine-tune the multilingual BERT model with the shared-normalization learning objective. 4 The XQA Dataset In this paper, we collect a novel dataset called XQA to support the cross-lingual OpenQA task. 4.1 Data Collection Wikipedia provides a daily “Did you know” box on the main page of various languages1, which contains several factual questions from Wikipedia editors, with links to the corresponding answers. This serves as a good source for cross-lingual OpenQA. We collect questions from this session, and use the entity name as well as its aliases from WikiData 2 knowledge base as golden answers. For each question, we retrieve top-10 Wikipedia articles ranked by BM25 as relevant documents. Examples in various languages are shown in Table 1. In Wikipedia articles, the entity name almost always appears at the very beginning of the document. The model may trivially predict the first few words, ignoring the true evidence in relevant documents. In order to avoid this, we remove the first paragraph from each document. In total, we collect 90, 610 questions in nine languages. For English, We keep around 3000 ques1For English: https://en.wikipedia.org/ wiki/Main_Page 2https://www.wikidata.org 2362 Language English French German Russian Tamil 1 human human human human human 2 taxon taxon taxon taxon literary work 3 film commune of France film film city 4 church film book book film 5 book book song archaeological site book 6 business enterprise song archaeological site battle chemical compound 7 song album business enterprise painting disease 8 album sovereign state painting song ethnic group 9 video game fossil taxon album literary work archaeological site 10 single single fossil taxon single chemical element Table 4: Top answer types in some languages. Language zh-en fr-en de-en pt-en ru-en THUMT 38.76 33.50 34.78 35.62 30.81 Google Trans 43.30 34.80 43.34 31.00 32.83 Table 5: BLEU score of some translation models. tions for development and test set respectively, and use the other questions as the training set. For other languages, we evenly split the questions into development and test set. The detailed statistics in each language are shown in Table 3. 4.2 Dataset Analysis We calculate the average length of questions and documents in different languages, and the results are shown in Table 2. The average question length for most languages falls in the range of 10 to 20. The average question length in all languages is 18.97. The documents on the XQA dataset are considerable long, containing 703.62 tokens and 11.02 paragraphs on average. 
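The documents behind these statistics were collected as described in Section 4.1: for each question, the top-10 Wikipedia articles are retrieved with BM25, and the first paragraph of each retrieved article is removed so that the entity name at the top of the page cannot be trivially copied. A minimal sketch of that procedure is given below; the BM25 implementation, tokenization, and paragraph splitting are illustrative assumptions, not the authors' pipeline.

# Minimal sketch of the XQA collection step: BM25 retrieval, then dropping
# the first paragraph of each retrieved article.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    # Whitespace tokenization stands in for the language-specific tokenizers.
    toks = [d.split() for d in docs]
    avg_len = sum(len(d) for d in toks) / len(toks)
    df = Counter(t for d in toks for t in set(d))
    n = len(docs)
    scores = []
    for d in toks:
        tf = Counter(d)
        s = 0.0
        for term in query.split():
            if term in tf:
                idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
                s += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(d) / avg_len))
        scores.append(s)
    return scores

def build_example(question, articles, top_k=10):
    scores = bm25_scores(question, articles)
    ranked = sorted(range(len(articles)), key=scores.__getitem__, reverse=True)[:top_k]
    # Drop the first paragraph of each retrieved article (Section 4.1);
    # paragraphs are assumed to be separated by blank lines.
    return ["\n\n".join(articles[i].split("\n\n")[1:]) for i in ranked]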
Documents in Tamil and Polish are among the shortest, with an average length of 200.45 and 256.87 respectively. Documents in French and Ukrainian contain much more paragraphs than documents in other languages. To understand whether questions in different languages have different topic distributions, we match the answers in WikiData, and obtain their types accordingly (Note that many answers either cannot be matched to WikiData entity or do not have a type label in WikiData). The top answer types in some of the languages from WikiData are displayed in Table 4. As we can see, there are some common topics across all languages, with human ranking first, and film and book ranking high. Besides, many questions in French are related to commune of France, while the topic battle ranks high in Russian. This indicates that XQA captures different data distributions for different languages, which may be influenced by cultural differences to some extent. 5 Experiments 5.1 Implementation Details In translate-test setting, we use our own translation system THUMT 3 (Zhang et al., 2017) to translate German, French, Portuguese, Russian, and Chinese data into English. Google Translate is used for Polish, Ukrainian, and Tamil as they are not supported by our translation system. Since it is very time-consuming to translate the large training data, we only perform the translate-train experiment for two selected languages, i.e., German and Chinese, using our translation system. To give an idea of the performance of the translation models, we report the BLEU scores in some public benchmarks in Table 5. To handle multiple paragraphs for a single question, following Clark and Gardner (2018), we adopt shared-normalization as the training objective on sampling paragraphs as training object for all models. Documents are restructured by merging consecutive paragraphs up to 400 tokens. During testing, the model is run on top-5 restructured paragraphs separately, and the answer span with the highest score is chosen as the prediction. For DocumentQA model, we use the official implementation4 and follow the setting for TriviaQA-Wiki in (Clark and Gardner, 2018). We use GloVe 300-dimensional word vector in Translate-Test setting, and 300-dimensional Skipgram word vector trained on Chinese/German Wikipedia dumps in Translate-Train setting. Our BERT model is similar to the BERT model for SQuAD in (Devlin et al., 2019), but we use shared-normalization on sampling paragraphs during training. We use the BASE setting 3http://thumt.thunlp.org 4https://github.com/allenai/ document-qa 2363 Model Translate-Test Translate-Train Zero-shot DocQA BERT DocQA BERT Multilingual BERT Languages EM F1 EM F1 EM F1 EM F1 EM F1 English 32.32 38.29 33.72 40.51 32.32 38.29 33.72 40.51 30.85 38.11 Chinese 7.17 17.20 9.81 23.05 7.45 18.73 18.93 31.50 25.88 39.53 French 11.19 18.97 15.42 26.13 23.34 31.08 German 12.98 19.15 16.84 23.65 11.23 15.08 19.06 24.33 21.42 26.87 Polish 9.73 16.51 13.62 22.18 16.27 21.87 Portuguese 10.03 15.86 13.75 21.27 18.97 23.95 Russian 5.01 9.62 7.34 13.61 10.38 13.44 Tamil 2.20 6.41 4.58 10.15 10.07 14.25 Ukrainian 7.94 14.07 10.53 17.72 15.12 20.82 Table 6: Overall results on the XQA dataset. Language Top-1 Top-5 Top-10 English 57.98 73.28 77.48 Chinese 51.21 66.35 70.52 French 49.58 69.12 74.59 German 41.86 55.90 60.14 Polish 31.52 46.75 52.60 Portuguese 35.21 51.34 57.57 Russian 28.88 43.87 49.77 Tamil 43.95 56.72 60.44 Ukrainian 43.85 60.22 65.12 Table 7: Retrieval performance on the XQA dataset. 
with a maximum sequence length of 512. The translate-test model is initialized with the public released “BERT-Base, Cased” pretrained model, while translate-train and multilingual BERT models are initialized with the “BERT-Base, Multilingual Cased” model. The widely accepted exact match (EM) and F1 over tokens in the answer(s) are used as the evaluation metrics. In translate-test setting, we translate the golden answers from the target languages into English, and report the results based on the translated answers. 5.2 Retrieval Results First, we show the retrieval performance for different languages in Table 7. As we can see, the retrieval performance varies for questions from different language sets. The retrieval results for questions from English, French and Chinese set are among the best, while answers to questions from Portuguese, Polish and Russian set are much harder to retrieve. Figure 1 suggests that as the question length increases, the retrieval performance in all languages grows. This is not difficult to understand, because longer questions will provide more information and make the retrieval problem easier. 10 20 30 40 50 60 70 80 90 100 Percentage of Question Length 20 30 40 50 60 70 80 90 Retrieval Performance (%) English Chinese French German Russian Figure 1: Retrieval performance over different question lengths. 5.3 Overall Results Table 6 shows the overall results for different methods in different languages. There is a large gap between the performance of English and that of other target languages, which implies that the task of cross-lingual OpenQA is difficult. In the English test set, the performance of the multilingual BERT model is worse than that of the monolingual BERT model. In almost all target languages, however, the multilingual model achieves the best result, manifesting its ability in capturing answers for questions across various languages. When we compare DocumentQA to BERT, although they have similar performance in English, BERT consistently outperforms DocumentQA by a large margin in all target languages in both translate-test and translate-train settings. We conjecture that it is because the BERT model, which has been pretrained on large-scale unlabeled text data, has better generalization power, and could better handle the different distributions between 2364 Translate-Test BERT Multilingual BERT Languages EM F1 EM F1 Chinese 12.50 26.53 35.93 48.49 French 22.45 33.35 31.21 39.23 German 32.22 41.67 36.67 43.58 Polish 28.21 37.22 31.17 37.41 Portuguese 25.81 35.10 33.68 39.52 Russian 14.77 24.95 21.11 25.67 Tamil 5.20 14.30 16.95 22.65 Ukrainian 16.89 30.30 24.26 32.18 Table 8: Reading comprehension performance. Languages Genetic dist. Pct. of easy EM German 30.8 19.09 36.67 Chinese 82.4 33.24 35.93 Portuguese 59.8 29.03 33.68 French 48.7 23.37 31.21 Polish 66.9 17.70 31.17 Ukrainian 60.3 21.18 24.26 Russian 60.3 18.56 21.11 Tamil 96.5 17.63 16.95 Table 9: Performance with respect to language distance and percentage of “easy” questions. the original English training data and the machine translated test data. Translate-train methods outperform translatetest methods in all cases except for DocumentQA in German. This may be due to the fact that DocumentQA uses space-tokenized words as basic units. In German, there is no space between compound words, resulting in countless possible combinations. Therefore, many of the words in translate-train German data do not have pretrained word vectors. 
On the contrary, using WordPiece tokenizer, BERT is not influenced by this. 6 Discussion 6.1 Reading Comprehension Results across Different Languages To remove the influence of retrieval, and compare the reading comprehension performance across different target languages, we conduct experiments on a subset of questions whose answers can be found in the top-10 retrieved documents. As BERT consistently outperforms DocumentQA in translation-based methods, we only report the result of BERT model in Table 8. We assume that the reading comprehension performance in the target language depends on two factors, the degree of similarity between the target language and the source language (i.e. English), and the intrinsic difficulty of the question set in the target language. Figure 2: Performance difference (EM) between translate-test BERT and multilingual BERT, along with the percentage of translation mismatch for answers. To quantify the intrinsic difficulty of the question sets in different languages, we calculate the percentage of questions whose answers can be found in the sentence that shares the most words with the question. We refer those questions as “easy” questions, and use the percentage of those questions as a rough indicator of how hard the subset is. To measure the degree of similarity between the target language and English, we use the genetic distance of the language pair given by eLinguistics.net 5. In their model, the score calculation for two languages is based on the comparison of the consonants in certain well-chosen words. The quantification of the consonant relationship is established partially with data from (Brown et al., 2013). The larger the distance is, the less similar English and the target language are. The results in Table 9 verify our assumption. The performance of different languages generally decreases as the genetic distance grows. The exceptions are Chinese and Portuguese since the percentages of “easy” questions in them are significantly higher than those in other languages. For languages that have similar genetic distances with English (i.e. Russian, Ukrainian, and Portuguese), the performance increases as the percentage of “easy” questions grows. 6.2 Limitation of Translation-based Method Our experiments demonstrate that translationbased methods do not perform well in crosslingual OpenQA task. Particularly, we observe 5http://www.elinguistics.net 2365 a large gap between the results of multilingual BERT and translate-test BERT for Chinese and Tamil. Through error analysis, we find that for a large portion of questions in Chinese and Tamil, the answers are translated into different forms under different conditions (i.e. with context and without context). This significantly decreases the metric numbers of translation-based systems in these languages. In Figure 2, we show the difference of reading comprehension performance (EM) between translate-test BERT and multilingual BERT, along with the percentage of questions whose answers are translated into different forms in the documents. As we can see, there is a correlation between the two variables. In fact, the performance of translation-based method depends heavily on the translation quality of name entities. As we know, name entities are critical for question answering. For many factual questions, the answers are either name entities themselves, or highly related to name entities (i.e. the property of a name entity). 
Translation error or inconsistency of name entities would significantly hurt the performance of translation-based cross-lingual OpenQA system. As shown in Figure 3, the name entity “未央宫(Weiyang Palace)” is incorrectly translated as “Fuyang Palace” in the question, while correctly translated in the retrieved document. In addition, as we can see from the underlined parts, highly similar expressions in the question and the retrieved document are translated into largely different ones. Compared to other words or phrases which occur more frequently in the training corpus, name entities are more flexible and various, and thus have worse translation results from prevailing Neural Machine Translation systems (Li et al., 2018). While some work has focused on solving this problem (Hassan et al., 2007; Jiang et al., 2007; Grundkiewicz and Heafield, 2018; Li et al., 2018), it remains largely underresearched. With a translation system that handles name entities better, we can potentially obtain better results from translation-based methods. 6.3 Zero-shot Cross-lingual Method Trained on pure English data without the involvement of machine translation systems, much effort has been saved using zero-shot cross-lingual methods. Moreover, a single model could be applied directly to various languages. Thus, compared to Origin Question: <Query>位于汉长安城外西南侧,与未央宫 之间曾有跨越城墙的复道相连? Retrieved Text: ...在长安城外修建了建章宫...并且与未 央宫之间有跨越宫墙和城墙的复道相通... Answer: 建章宫 Translation Result Question: <Query> is located on the southwest side of Han Chang'an City. It is connected with the Fuyang Palace. Retrieved Text: ... and built a Jianzhang Palace outside Chang'an City ... and there is a cross between the Weiyang Palace and the city wall ... Answer: Jianzhang Palace Figure 3: Example of translation error of name entity. subset English Chinese ∆ easy 58.30 52.48 -5.82 ( -9.98%) other 38.42 28.77 -9.65 (-25.11%) Table 10: Reading comprehension performance for English and Chinese. translation-based methods, zero-shot cross-lingual method seems to be a more practical way to build a cross-lingual OpenQA system. Although trained and tested in different languages, the multilingual BERT model achieves relatively good results on the XQA dataset. This may indicate that multilingual BERT could transfer the ability of capturing some common interaction patterns between different text across different languages via pretraining a unified text encoder. To further investigate the cross-lingual transfer power of multilingual BERT, we examine the difference of reading comprehension performance between English and Chinese test sets, for “easy” questions and other questions respectively. Results in Table 10 show the performance gap between the source language and the target language for “easy” questions is much smaller than that for other questions. This may indicate that multilingual BERT better captures shallow matching information across different languages. Despite multilingual BERT has been proved to have certain power in cross-lingual understanding, no parallel data is used in it. Another line of research extracts multilingual representation from the context vector of NMT models that are trained on parallel data (Schwenk and Douze, 2017; Artetxe and Schwenk, 2018), which may be complementary to multilingual BERT. Very recently, Lample and Conneau (2019) proposed a multilin2366 gual language model that leveraged both monolingual and parallel data. Incorporating monolingual and parallel data may help to improve the performance in cross-lingual OpenQA. 
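For reference, the "easy question" indicator used in Tables 9 and 10, whether the answer occurs in the document sentence sharing the most words with the question, can be computed as sketched below. The tokenization and data layout are illustrative assumptions, not the authors' exact script.

# Minimal sketch of the "easy question" heuristic from Section 6.1.
def is_easy(question: str, answer: str, sentences: list) -> bool:
    # A question is "easy" if its answer appears in the sentence that shares
    # the most words with the question.
    q_words = set(question.lower().split())
    best = max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
    return answer.lower() in best.lower()

def easy_fraction(examples) -> float:
    # `examples` is an iterable of (question, answer, sentences) triples.
    flags = [is_easy(q, a, sents) for q, a, sents in examples]
    return sum(flags) / max(len(flags), 1)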
7 Conclusion In this paper, we discuss the problem of crosslingual open-domain question answering, and present a novel dataset XQA, which consists of a total amount of 90k question-answer pairs in nine languages. We further examine the performance of two translation-based methods and one zero-shot cross-lingual method on the XQA dataset. The experimental results show that multilingual BERT achieves the best result in almost all target languages. The performance of translation-based methods can be increased by applying machine translation system that better translates name entities, while the multilingual BERT model may be improved by incorporating parallel data with monolingual data. We hope our work could contribute to the development of cross-lingual OpenQA systems and further promote the research of overall cross-lingual language understanding. Acknowledgement This research is jointly supported by the NSFC project under the grant no. 61661146007 and the NExT++ project, the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@Singapore Funding Initiative. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of AAAI, pages 5012–5019. Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. arXiv preprint arXiv:1812.10464. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Cecil H Brown, Eric W Holman, and Søren Wichmann. 2013. Sound correspondences in the world’s languages. Language, pages 4–29. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of SemEval-2017, pages 1–14, Vancouver, Canada. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/daily mail reading comprehension task. In Proceedings of ACL, pages 2358–2367, Berlin, Germany. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of ACL, pages 1870–1879, Vancouver, Canada. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of ACL, pages 209– 220, Vancouver, Canada. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of ACL, pages 845–855, Melbourne, Australia. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. In Proceedings of ICLR. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of EMNLP, pages 2475–2485, Brussels, Belgium. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of ACL, pages 593–602, Vancouver, Canada. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of NAACL, pages 4171– 4186, Minneapolis, Minnesota. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gatedattention readers for text comprehension. In Proceedings of ACL, pages 1832–1846, Vancouver, Canada. Cristina Espana-Bonet, Ádám Csaba Varga, Alberto Barrón-Cedeño, and Josef van Genabith. 2017. An empirical analysis of nmt-derived interlingual embeddings and their use in parallel sentence identification. IEEE Journal of Selected Topics in Signal Processing, 11(8):1340–1350. 2367 Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proceedings of ICML, volume 37, pages 748–756, Lille, France. B Green, A Wolf, C Chomsky, and K Laughery. 1986. Readings in natural language processing. pages 545–549, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Roman Grundkiewicz and Kenneth Heafield. 2018. Neural machine translation techniques for named entity transliteration. In Proceedings of the Seventh Named Entities Workshop, pages 89–94, Melbourne, Australia. Association for Computational Linguistics. Ahmed Hassan, Haytham Fahmy, and Hany Hassan. 2007. Improving named entity translation by exploiting comparable and parallel corpora. In Proceedings of Workshop in AMML. Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named entity translation with web mining and transliteration. In Proceedings of IJCAI, pages 1629–1634, San Francisco, CA, USA. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING, pages 1459–1474, Mumbai, India. Cody Kwok, Oren Etzioni, Oren Etzioni, and Daniel S. Weld. 2001. Scaling question answering to the web. ACM Transactions on Information Systems, 19(3):242–262. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of ICLR. Zhongwei Li, Xuancong Wang, AiTi Aw, Eng Siong Chng, and Haizhou Li. 2018. Named-entity tagging and domain adaptation for better customized translation. In Proceedings of the Seventh Named Entities Workshop, pages 41–46, Melbourne, Australia. Association for Computational Linguistics. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of ACL, pages 1736–1745, Melbourne, Australia. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159, Denver, Colorado. Association for Computational Linguistics. Bernardo Magnini, Danilo Giampiccolo, Pamela Forner, Christelle Ayache, Valentin Jijkoun, Petya Osenova, Anselmo Peñas, Paulo Rocha, Bogdan Sacaleanu, and Richard Sutcliffe. 2006. Overview of the clef 2006 multilingual question answering track. In Proceedings of Workshop of CLEF, pages 223–256. Springer. Bernardo Magnini, Alessandro Vallin, Christelle Ayache, Gregor Erbach, Anselmo Peñas, Maarten De Rijke, Paulo Rocha, Kiril Simov, and Richard Sutcliffe. 2004. Overview of the clef 2004 multilingual question answering track. In Proceedings of Workshop of CLEF, pages 371–391. Springer. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 
2018. Efficient and robust question answering from minimal context over documents. In Proceedings of ACL, pages 1725–1735, Melbourne, Australia. Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of NAACL, pages 3795–3805, Minneapolis, Minnesota. Holger Schwenk and Matthijs Douze. 2017. Learning joint multilingual sentence representations with neural machine translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157–167, Vancouver, Canada. Association for Computational Linguistics. Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of LREC, Miyazaki, Japan. Alessandro Vallin, Bernardo Magnini, Danilo Giampiccolo, Lili Aunimo, Christelle Ayache, Petya Osenova, Anselmo Peñas, Maarten De Rijke, Bogdan Sacaleanu, Diana Santos, et al. 2005. Overview of the clef 2005 multilingual question answering track. In Proceedings of Workshop of CLEF, pages 307–331. Springer. Ellen M Voorhees et al. 1999. The TREC-8 question answering track report. In Proceedings of TREC, pages 77–82. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced ranker-reader for open-domain question answering. In Proceedings of AAAI, pages 5981– 5988. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In Proceedings of ICLR. 2368 Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huanbo Luan, and Yang Liu. 2017. THUMT: An open source toolkit for neural machine translation. arXiv preprint arXiv:1706.06415. Meng Zhang, Yang Liu, Huanbo Luan, Maosong Sun, Tatsuya Izuha, and Jie Hao. 2016. Building earth mover’s distance on bilingual word embeddings for machine translation. In Proceedings of AAAI, pages 2870–2876. AAAI Press.
2019
227
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2369–2385 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2369 Compound Probabilistic Context-Free Grammars for Grammar Induction Yoon Kim Harvard University Cambridge, MA, USA [email protected] Chris Dyer DeepMind London, UK [email protected] Alexander M. Rush Harvard University Cambridge, MA, USA [email protected] Abstract We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our context-free rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable, and the latent trees are marginalized with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods for grammar induction from words with neural language models. 1 Introduction Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm (Lari and Young, 1990; Carroll and Charniak, 1992). While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives (Klein and Manning, 2002), priors or Code: https://github.com/harvardnlp/compound-pcfg non-parametric models (Kurihara and Sato, 2006; Johnson et al., 2007; Liang et al., 2007; Wang and Blunsom, 2013), and manually-engineered features (Huang et al., 2012; Golland et al., 2012) to encourage the desired structures to emerge. We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG’s rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models (Arora et al., 2018; Xu et al., 2018; Du et al., 2019), and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. 
In this compound PCFG—continuous mixture of PCFGs—the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process. To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational 2370 posterior approximated from an inference network (Kingma and Welling, 2014; Rezende et al., 2014). On standard benchmarks for English and Chinese, the proposed approach is found to perform favorably against recent neural network-based approaches to grammar induction (Shen et al., 2018, 2019; Drozdov et al., 2019; Kim et al., 2019). 2 Probabilistic Context-Free Grammars We consider context-free grammars (CFG) consisting of a 5-tuple G = (S, N, P, Σ, R) where S is the distinguished start symbol, N is a finite set of nonterminals, P is a finite set of preterminals,1 Σ is a finite set of terminal symbols, and R is a finite set of rules of the form, S →A, A ∈N A →B C, A ∈N, B, C ∈N ∪P T →w, T ∈P, w ∈Σ. A probabilistic context-free grammar (PCFG) consists of a grammar G and rule probabilities π = {πr}r∈R such that πr is the probability of the rule r. Letting TG be the set of all parse trees of G, a PCFG defines a probability distribution over t ∈TG via pπ(t) = Q r∈tR πr where tR is the set of rules used in the derivation of t. It also defines a distribution over string of terminals x ∈Σ∗via pπ(x) = X t∈TG(x) pπ(t), where TG(x) = {t | yield(t) = x}, i.e. the set of trees t such that t’s leaves are x. We will use pπ(t | x) ≜pπ(t | yield(t) = x) to denote the posterior distribution over latent trees given the observed sentence x. Parameterization The standard way to parameterize a PCFG is to simply associate a scalar to each rule πr with the constraint that they form valid probability distributions, i.e. each nonterminal is associated with a fully-parameterized categorical distribution over its rules. This direct parameterization is algorithmically convenient since the M-step in the EM algorithm (Dempster et al., 1977) has a closed form. However, there is a long history of work showing that it is difficult to learn meaningful grammars from natural language data with this parameterization (Carroll and 1Since we will be inducing a grammar directly from words, P is roughly the set of part-of-speech tags and N is the set of constituent labels. However, to avoid issues of label alignment, evaluation is only on the tree topology. Charniak, 1992).2 Successful approaches to unsupervised parsing have therefore modified the model/learning objective by guiding potentially unrelated rules to behave similarly. Recognizing that sharing among rule types is beneficial, we propose a neural parameterization where rule probabilities are based on distributed representations. We associate embeddings with each symbol, introducing input embeddings wN for each symbol N on the left side of a rule (i.e. N ∈{S} ∪N ∪P). 
For each rule type r, π_r is parameterized as follows,

$$\pi_{S \rightarrow A} = \frac{\exp(u_A^\top f_1(w_S))}{\sum_{A' \in \mathcal{N}} \exp(u_{A'}^\top f_1(w_S))}, \qquad
\pi_{A \rightarrow BC} = \frac{\exp(u_{BC}^\top w_A)}{\sum_{B'C' \in \mathcal{M}} \exp(u_{B'C'}^\top w_A)}, \qquad
\pi_{T \rightarrow w} = \frac{\exp(u_w^\top f_2(w_T))}{\sum_{w' \in \Sigma} \exp(u_{w'}^\top f_2(w_T))},$$

where M is the product space (N ∪ P) × (N ∪ P), and f1, f2 are MLPs with two residual layers (see appendix A.1 for the full parameterization). We will use E_G = {w_N | N ∈ {S} ∪ N ∪ P} to denote the set of input symbol embeddings for a grammar G, and λ to refer to the parameters of the neural network used to obtain the rule probabilities. A graphical model-like illustration of the neural PCFG is shown in Figure 1 (left). It is clear that the neural parameterization does not change the underlying probabilistic assumptions. The difference between the two is analogous to the difference between count-based vs. feed-forward neural language models, where feedforward neural language models make the same Markov assumptions as the count-based models but are able to take advantage of shared, distributed representations.

3 Compound PCFGs

A compound probability distribution (Robbins, 1951) is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process,

$$z \sim \mathcal{N}(0, I), \qquad x \sim \mathcal{N}(Wz, \Sigma).$$

Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally expensive unless conjugacy can be exploited.

[Footnote 2: In preliminary experiments we were indeed unable to learn linguistically meaningful grammars with this PCFG.]

Figure 1: A graphical model-like diagram for the neural PCFG (left) and the compound PCFG (right) for an example tree structure. In the above, A1, A2 ∈ N are nonterminals, T1, T2, T3 ∈ P are preterminals, w1, w2, w3 ∈ Σ are terminals. In the neural PCFG, the global rule probabilities π = πS ∪ πN ∪ πP are the output from a neural net run over the symbol embeddings EG, where πN are the set of rules with a nonterminal on the left hand side (πS and πP are similarly defined). In the compound PCFG, we have per-sentence rule probabilities πz = πz,S ∪ πz,N ∪ πz,P obtained from running a neural net over a random vector z (which varies across sentences) and global symbol embeddings EG. In this case, the context-free assumptions hold conditioned on z, but they do not hold unconditionally: e.g. when conditioned on z and A2, the variables A1 and T1 are independent; however when conditioned on just A2, they are not independent due to the dependence path through z. Note that the rule probabilities are random variables in the compound PCFG but deterministic variables in the neural PCFG.

In this work, we study compound probabilistic context-free grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via

$$z \sim p_\gamma(z), \qquad \pi_z = f_\lambda(z, E_G),$$

where p_γ(z) is a prior with parameters γ (spherical Gaussian in this paper), and f_λ is a neural network that concatenates the input symbol embeddings with z and outputs the sentence-level rule probabilities π_z,

$$\pi_{z, S \rightarrow A} \propto \exp(u_A^\top f_1([w_S; z])), \qquad
\pi_{z, A \rightarrow BC} \propto \exp(u_{BC}^\top [w_A; z]), \qquad
\pi_{z, T \rightarrow w} \propto \exp(u_w^\top f_2([w_T; z])),$$

where [w; z] denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by π_z,

$$t \sim \mathrm{PCFG}(\pi_z), \qquad x = \mathrm{yield}(t).$$
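As a concrete illustration of the parameterization above, the following is a minimal NumPy sketch (not the released implementation) that computes the neural PCFG rule probabilities from symbol embeddings via the residual MLPs of appendix A.1, and shows how the compound variant concatenates a sampled z; all sizes, initializations, and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
NT, PT, V, D = 4, 3, 10, 16          # |N|, |P|, |Sigma|, symbol-embedding size

def relu(x):
    return np.maximum(x, 0.0)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def f(x, W, blocks):
    # f_i(x) = g_{i,1}(g_{i,2}(W_i x)) with g_{i,j}(y) = ReLU(V ReLU(U y)) + y  (appendix A.1)
    y = W @ x
    for U, Vm in blocks:
        y = relu(Vm @ relu(U @ y)) + y
    return y

# Input symbol embeddings E_G (w_S, w_N, w_P) and output embeddings u.
w_S = rng.normal(size=D)
w_N = rng.normal(size=(NT, D))
w_P = rng.normal(size=(PT, D))
u_A = rng.normal(size=(NT, D))                    # outputs for S -> A
u_BC = rng.normal(size=(NT + PT, NT + PT, D))     # outputs for A -> B C
u_w = rng.normal(size=(V, D))                     # outputs for T -> w

W1 = 0.1 * rng.normal(size=(D, D))
W2 = 0.1 * rng.normal(size=(D, D))
blocks1 = [(0.1 * rng.normal(size=(D, D)), 0.1 * rng.normal(size=(D, D))) for _ in range(2)]
blocks2 = [(0.1 * rng.normal(size=(D, D)), 0.1 * rng.normal(size=(D, D))) for _ in range(2)]

# Neural PCFG rule probabilities.
pi_S = softmax(u_A @ f(w_S, W1, blocks1))                                    # pi_{S -> A}
pi_N = np.stack([softmax((u_BC @ w_N[a]).ravel()).reshape(NT + PT, NT + PT)
                 for a in range(NT)])                                        # pi_N[a, b, c] = pi_{A_a -> B_b C_c}
pi_P = np.stack([softmax(u_w @ f(w_P[t], W2, blocks2)) for t in range(PT)])  # pi_{T -> w}

# Compound PCFG: same form, but scores come from [w ; z] for a per-sentence z ~ N(0, I);
# shown here for pi_{z, S -> A} only, with the first layer widened to accept [w_S ; z].
Z = 8
z = rng.normal(size=Z)
W1z = 0.1 * rng.normal(size=(D, D + Z))
pi_z_S = softmax(u_A @ f(np.concatenate([w_S, z]), W1z, blocks1))
```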
This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by z, λ, EG.3 Importantly, under this generative model the context-free assumptions hold conditioned on z, but they do not hold unconditionally. This is shown in Figure 1 (right) where there is a dependence path through z if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees t via pθ(t) = Z p(t | z)pγ(z) dz, 3Under the Bayesian PCFG view, pγ(z) is a distribution over z (a subset of the prior), and is thus a hyperprior. where pθ(t | z) = Q r∈tR πz,r. The subscript in πz,r denotes the fact that the rule probabilities depend on z. Compound PCFGs are clearly more expressive than PCFGs as each sentence has its own set of rule probabilities. However, it still assumes a tree-based generative process, making it possible to learn latent tree structures. Our motivation for the compound PCFG is based on the observation that for grammar induction, first-order context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training.4 Higher-order PCFGs can introduce dependencies between children and ancestors/siblings through, for example, vertical/horizontal Markovization (Johnson, 1998; Klein and Manning, 2003). However such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some higher-order PCFG where a child can depend on its ancestors and siblings through a shared latent vector. We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb 4A piece of evidence for the misspecification of first-order PCFGs as a statistical model of natural language is that if one pretrains a first-order PCFG on supervised data and continues training with the unsupervised objective (i.e. log marginal likelihood), the resulting grammar deviates significantly from the supervised initial grammar while the log marginal likelihood improves (Johnson et al., 2007). Similar observations have been made for part-of-speech induction with Hidden Markov Models (Merialdo, 1994). 2372 then the noun phrase is likely to be a movie. In contrast to the usual Bayesian treatment of PCFGs which places priors on global rule probabilities (Kurihara and Sato, 2006; Johnson et al., 2007; Wang and Blunsom, 2013), the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by Cohen et al. (2009) and Cohen and Smith (2009), who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) (Klein and Manning, 2004). Inference in Compound PCFGs The expressivity of compound PCFGs comes at a significant challenge in learning and inference. Letting θ = {EG, λ} be the parameters of the generative model, we would like to maximize the log marginal likelihood of the observed sentence log pθ(x). In the neural PCFG the log marginal likelihood log pθ(x) = log P t∈TG(x) pθ(t) can be obtained by summing out the latent tree structure using the inside algorithm (Baker, 1979), which is differentiable and thus amenable to gradientbased optimization. 
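To make this marginalization concrete, here is a minimal NumPy sketch of the inside algorithm for the rule schema above (S → A, A → B C with B, C ∈ N ∪ P, T → w), assuming symbols are indexed with nonterminals before preterminals. It works in probability space for readability; a practical implementation would operate in log space or with rescaling, and is not the released code.

```python
import numpy as np

def inside_log_likelihood(sent, pi_S, pi_N, pi_P):
    """
    sent : list of word ids, length n (n >= 2; a single word is not derivable here)
    pi_S : (|N|,)                       probabilities of S -> A
    pi_N : (|N|, |N|+|P|, |N|+|P|)      probabilities of A -> B C
    pi_P : (|P|, |V|)                   probabilities of T -> w
    Returns log p(x) = log of the sum over trees of the product of rule probabilities.
    """
    n = len(sent)
    NT, M, _ = pi_N.shape              # M = |N| + |P|; 0..NT-1 are nonterminals,
    beta = np.zeros((n, n, M))         # NT..M-1 are preterminals

    for i, w in enumerate(sent):       # width-1 spans: preterminal emissions only
        beta[i, i, NT:] = pi_P[:, w]

    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width - 1
            for k in range(i, j):      # split point
                left, right = beta[i, k], beta[k + 1, j]
                # sum_{B,C} pi(A -> B C) * beta[i,k,B] * beta[k+1,j,C]
                beta[i, j, :NT] += np.einsum('abc,b,c->a', pi_N, left, right)

    return np.log(pi_S @ beta[0, n - 1, :NT])
```

The chart has O(n^2 |N ∪ P|) entries and each split point sums over all binary rules, which is the O(|R||x|^3) cost noted later in the limitations discussion.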
In the compound PCFG, the log marginal likelihood is given by

$$\log p_\theta(x) = \log \left( \int \sum_{t \in \mathcal{T}_G(x)} p_\theta(t \mid z)\, p_\gamma(z)\, dz \right).$$

Notice that while the integral over z makes this quantity intractable, when we condition on z, we can tractably perform the inner summation as before using the inside algorithm. We therefore resort to collapsed amortized variational inference. We first obtain a sample z from a variational posterior distribution (given by an amortized inference network), then perform the inner marginalization conditioned on this sample. The evidence lower bound ELBO(θ, φ; x) is then given by

$$\mathrm{ELBO}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \mathrm{KL}[q_\phi(z \mid x) \,\|\, p_\gamma(z)],$$

and we can calculate p_θ(x | z) = Σ_{t ∈ TG(x)} p(t | z) with the inside algorithm given a sample z from a variational posterior q_φ(z | x). For the variational family we use a diagonal Gaussian where the mean/log-variance vectors are given by an affine layer over maxpooled hidden states from an LSTM over x. We can obtain low-variance estimators for the gradient ∇_{θ,φ} ELBO(θ, φ; x) by using the reparameterization trick for the expected reconstruction likelihood and the analytical expression for the KL term (Kingma and Welling, 2014). We remark that under the Bayesian PCFG view, since the parameters of the prior (i.e. θ) are estimated from the data, our approach can be seen as an instance of empirical Bayes (Robbins, 1956).[5]

[Footnote 5: See Berger (1985) (chapter 4), Zhang (2003), and Cohen (2016) (chapter 3) for further discussion on compound models and empirical Bayes.]

MAP Inference After training, we are interested in comparing the learned trees against an annotated treebank. This requires inferring the most likely tree given a sentence, i.e. argmax_t p_θ(t | x). For the neural PCFG we can obtain the most likely tree by using the Viterbi version of the inside algorithm (CKY algorithm). For the compound PCFG, the argmax is intractable to obtain exactly, and hence we estimate it with the following approximation,

$$\operatorname*{argmax}_t \int p_\theta(t \mid x, z)\, p_\theta(z \mid x)\, dz \;\approx\; \operatorname*{argmax}_t \, p_\theta\big(t \mid x, \mu_\phi(x)\big),$$

where μ_φ(x) is the mean vector from the inference network. The above approximates the true posterior p_θ(z | x) with δ(z − μ_φ(x)), the Dirac delta function at the mode of the variational posterior.[6] This quantity is tractable to estimate as in the PCFG case. Other approximations are possible: for example we could use q_φ(z | x) as an importance sampling distribution to estimate the first integral. However we found the above approximation to be efficient and effective in practice.

[Footnote 6: Since p_θ(t | x, z) is continuous with respect to z, we have $\int p_\theta(t \mid x, z)\, \delta(z - \mu_\phi(x))\, dz = p_\theta(t \mid x, \mu_\phi(x))$.]

4 Experimental Setup

Data We test our approach on the Penn Treebank (PTB) (Marcus et al., 1993) with the standard splits (2-21 for training, 22 for validation, 23 for test) and the same preprocessing as in recent works (Shen et al., 2018, 2019), where we discard punctuation, lowercase all tokens, and take the top 10K most frequent words as the vocabulary. This task is more challenging than traditional setups, which usually experiment on shorter sentences and use gold part-of-speech tags. We further experiment on Chinese with version 5.1 of the Chinese Penn Treebank (CTB) (Xue et al., 2005), with the same splits as in Chen and Manning (2014). On CTB we also remove punctuation and keep the top 10K words.

Hyperparameters Our PCFG uses 30 nonterminals and 60 preterminals, with 256-dimensional symbol embeddings. The compound PCFG uses 64-dimensional latent vectors.
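Before the remaining setup details, the per-sentence training objective described above can be summarized in a short sketch: sample z with the reparameterization trick, compute the exact inner marginalization over trees with the inside algorithm, and subtract the closed-form Gaussian KL. The names encode, rule_probs, and inside_log_likelihood are placeholders for the inference network, the decoder f_λ, and the dynamic program; this is a forward-pass illustration, not the released code.

```python
import numpy as np

def elbo(sent, encode, rule_probs, inside_log_likelihood, rng):
    mu, logvar = encode(sent)                   # q_phi(z | x) = N(mu, diag(exp(logvar)))
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps         # reparameterized sample of z
    pi_S, pi_N, pi_P = rule_probs(z)            # pi_z = f_lambda(z, E_G)
    recon = inside_log_likelihood(sent, pi_S, pi_N, pi_P)   # log p_theta(x | z), exact over trees
    # KL[N(mu, diag(exp(logvar))) || N(0, I)] in closed form
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon - kl
```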
The bidirectional LSTM inference network has a single layer with 512 dimensions, and the mean and the log variance vector for qφ(z | x) are given by max-pooling the hidden states of the LSTM and passing it through an affine layer. Model parameters are initialized with Xavier uniform initialization. For training we use Adam (Kingma and Ba, 2015) with β1 = 0.75, β2 = 0.999 and learning rate of 0.001, with a maximum gradient norm limit of 3. We train for 10 epochs with batch size equal to 4. We employ a curriculum learning strategy (Bengio et al., 2009) where we train only on sentences of length up to 30 in the first epoch, and increase this length limit by 1 each epoch. This slightly improved performance and similar strategies have used in the past for grammar induction (Spitkovsky et al., 2012). During training we perform early stopping based on validation perplexity.7 To mitigate against overfitting to PTB, experiments on CTB utilize the same hyperparameters from PTB. Baselines and Evaluation We observe that even on PTB, there is enough variation in setups across prior work on grammar induction to render a meaningful comparison difficult. Some important dimensions along which prior works vary include, (1) lexicalization: earlier work on grammar induction generally assumed gold (or induced) partof-speech tags (Klein and Manning, 2004; Smith and Eisner, 2004; Bod, 2006; Snyder et al., 2009), while more recent works induce grammar directly from words (Spitkovsky et al., 2013; Shen et al., 2018); (2) use of punctuation: even within papers that induce a grammar directly from words, some papers employ heuristics based on punctuation as punctuation is usually a strong signal for start/end of constituents (Seginer, 2007; Ponvert et al., 2011; Spitkovsky et al., 2013), some train with punctuation (Jin et al., 2018; Drozdov et al., 2019; Kim et al., 2019), while others discard punctuation altogether for training (Shen et al., 2018, 2019); (3) train/test data: some works do not explicitly separate out train/test sets (Reichart and Rappoport, 2010; Golland et al., 2012) while some do (Huang et al., 2012; Parikh et al., 2014; Htut 7However, we used F1 against validation trees on PTB to select some hyperparameters (e.g. grammar size), as is sometimes done in grammar induction. Hence our PTB results are arguably not fully unsupervised in the strictest sense of the term. The hyperparameters of the PRPN/ON baselines are also tuned using validation F1 for fair comparison. et al., 2018). Maintaining train/test splits is less of an issue for unsupervised structure learning, however in this work we follow the latter and separate train/test data. (4) evaluation: for unlabeled F1, almost all works ignore punctuation (even approaches that use punctuation during training typically ignore them during evaluation), but there is some variance in discarding trivial spans (widthone and sentence-level spans) and using corpuslevel versus sentence-level F1.8 In this paper we discard trivial spans and evaluate on sentencelevel F1 per recent work (Shen et al., 2018, 2019). Given the above, we mainly compare our approach against two recent, strong baselines with open source code: Parsing Predict Reading Network (PRPN)9 (Shen et al., 2018) and Ordered Neurons (ON)10 (Shen et al., 2019). These approaches train a neural language model with gated attention-like mechanisms to induce binary trees, and achieve strong unsupervised parsing performance even when trained on corpora where punctuation is removed. 
Since the original results were on both language modeling and grammar induction, their hyperparameters were presumably tuned to do well on both and thus may not be optimal for just unsupervised parsing. We therefore tune the hyperparameters of these baselines for unsupervised parsing only (i.e. on validation F1). 5 Results and Discussion Table 1 shows the unlabeled F1 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization.11 See appendix A.2 for the full results (including corpuslevel F1) broken down by sentence length. Table 2 analyzes the learned tree structures. We compare similarity as measured by F1 against gold, left, right, and “self” trees (top), where self F1 score is calculated by averaging over all 6 pairs 8Corpus-level F1 calculates precision/recall at the corpus level to obtain F1, while sentence-level F1 calculates F1 for each sentence and averages across the corpus. 9https://github.com/yikangshen/PRPN 10https://github.com/yikangshen/Ordered-Neurons 11The training perplexity was much higher than in the neural case, indicating significant optimization issues. 2374 PTB CTB Model Mean Max Mean Max PRPN (Shen et al., 2018) 37.4 38.1 − − ON (Shen et al., 2019) 47.7 49.4 − − URNNG† (Kim et al., 2019) − 45.4 − − DIORA† (Drozdov et al., 2019) − 58.9 − − Left Branching 8.7 9.7 Right Branching 39.5 20.0 Random Trees 19.2 19.5 15.7 16.0 PRPN (tuned) 47.3 47.9 30.4 31.5 ON (tuned) 48.1 50.0 25.4 25.7 Neural PCFG 50.8 52.6 25.7 29.5 Compound PCFG 55.2 60.1 36.0 39.8 Oracle Trees 84.3 81.1 Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with † are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup. obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table 2, bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seemed to have some difficulty in identifying SBAR/VP constituents which typically span more words than NP constituents. 
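For reference, the evaluation protocol behind these numbers (sentence-level unlabeled F1 with trivial spans discarded) can be sketched as follows; the span representation and edge-case handling here are assumptions for illustration, not the exact evaluation script.

```python
def sentence_f1(pred_spans, gold_spans, sent_len):
    # Drop trivial spans: width-one spans and the whole-sentence span.
    drop = lambda spans: {(i, j) for (i, j) in spans
                          if j > i and not (i == 0 and j == sent_len - 1)}
    pred, gold = drop(pred_spans), drop(gold_spans)
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0

def sentence_level_f1(per_sentence_scores):
    # Sentence-level F1 averages per-sentence F1; corpus-level F1 would instead pool
    # precision/recall counts over the whole corpus before computing F1 (footnote 8).
    return sum(per_sentence_scores) / len(per_sentence_scores)
```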
Induced Trees for Downstream Tasks While the compound PCFG has fewer independence assumptions than the neural PCFG, it is still a more constrained model of language than standard neural language models (NLM) and thus not competitive in terms of perplexity: the compound PCFG obtains a perplexity of 196.3 while an LSTM language model (LM) obtains 86.2 (Table 3).12 In contrast, both PRPN and ON perform as well as an 12We did manage to almost match the perplexity of an NLM by additionally conditioning the terminal probabilities on previous history, i.e. πz,T →wt ∝ exp(u⊤ w f2([wT ; z; ht])) where ht is the hidden state from an LSTM over x<t. However the unsupervised parsing performance was far worse (≈25 F1 on the PTB). PRPN ON PCFG Comp. PCFG Gold 47.3 48.1 50.8 55.2 Left 1.5 14.1 11.8 13.0 Right 39.9 31.0 27.7 28.4 Self 82.3 71.3 65.2 66.8 SBAR 50.0% 51.2% 52.5% 56.1% NP 59.2% 64.5% 71.2% 74.7% VP 46.7% 41.0% 33.8% 41.7% PP 57.2% 54.4% 58.8% 68.8% ADJP 44.3% 38.1% 32.5% 40.4% ADVP 32.8% 31.6% 45.5% 52.5% Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall). LSTM LM while maintaining good unsupervised parsing performance. We thus experiment to see if it is possible to use the induced trees to supervise a more flexible generative model that can make use of tree structures—namely, recurrent neural network grammars (RNNG) (Dyer et al., 2016). RNNGs are generative models of language that jointly model syntax and surface structure by incrementally generating a syntax tree and sentence. As with NLMs, RNNGs make no independence assumptions, and have been shown to outperform NLMs in terms of perplexity and grammaticality judgment when trained on gold trees (Kuncoro et al., 2018; Wilcox et al., 2019). We take the best run from each model and parse the training set,13 and use the induced trees to supervise an RNNG for each model using the parameterization from Kim et al. (2019).14 We are also interested in syntactic evaluation of our models, and for this we utilize the framework and dataset from Marvin and Linzen (2018), where a model is presented two minimally different sentences such as: the senators near the assistant are old *the senators near the assistant is old and must assign higher probability to grammatical sentence. Additionally, Kim et al. (2019) report perplexity improvements by fine-tuning an RNNG trained on gold trees with the unsupervised RNNG (URNNG)—whereas the RNNG is is trained to maximize the joint likelihood log p(x, t), the URNNG maximizes a lower bound on the log marginal likelihood log P t p(x, t) with a structured inference network that approximates the true 13The train/test F1 was similar for all models. 14https://github.com/harvardnlp/urnng 2375 PPL Syntactic Eval. F1 LSTM LM 86.2 60.9% − PRPN 87.1 62.2% 47.9 Induced RNNG 95.3 60.1% 47.8 Induced URNNG 90.1 61.8% 51.6 ON 87.2 61.6% 50.0 Induced RNNG 95.2 61.7% 50.6 Induced URNNG 89.9 61.9% 55.1 Neural PCFG 252.6 49.2% 52.6 Induced RNNG 95.8 68.1% 51.4 Induced URNNG 86.0 69.1% 58.7 Compound PCFG 196.3 50.7% 60.1 Induced RNNG 89.8 70.0% 58.1 Induced URNNG 83.7 76.1% 66.9 RNNG on Oracle Trees 80.6 70.4% 71.9 + URNNG Fine-tuning 78.3 76.1% 72.8 Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG). 
Induced URNNG indicates fine-tuning with the URNNG. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are on the PTB test set, while Syntactic Eval. is based on the dataset from Marvin and Linzen (2018). Note that the perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. posterior. We experiment with a similar approach where we fine-tune RNNGs trained on induced trees with URNNGs. We perform early stopping for both RNNG and URNNG based on validation perplexity. See appendix A.3 for further details regarding the experimental setup. The results are shown in Table 3. For perplexity, RNNGs trained on induced trees (Induced RNNG in Table 3) are unable to improve upon an LSTM LM,15 in contrast to the supervised RNNG which does outperform the LSTM language model (Table 3, bottom). For grammaticality judgment however, the RNNG trained with compound PCFG trees outperforms the LSTM LM despite obtaining worse perplexity,16 and performs on par with the RNNG trained on gold trees. Fine-tuning with the URNNG results in improvements in perplexity and grammaticality judgment across the board (Induced URNNG in Table 3). We also obtain large improvements on unsupervised parsing as measured by F1, with the fine-tuned URNNGs outperforming the respective original models.17 This is potentially due to an ensembling effect be15Under our RNNG parameterization, the LSTM LM is equivalent to an RNNG trained with right branching trees. 16Kuncoro et al. (2018) also find that lower perplexity does not always lead to better performance on syntactic evaluation. 17Li et al. (2019) similarly obtain improvements by refining a model trained on induced trees on classification tasks. Figure 2: Alignment of induced nonterminals ordered from top based on predicted frequency (therefore NT-04 is the most frequently-predicted nonterminal). For each nonterminal we visualize the proportion of correctly-predicted constituents that correspond to particular gold labels. For reference we also show the precision (i.e. probability of correctly predicting unlabeled constituents) in the rightmost column. tween the original model and the URNNG’s structured inference network, which is parameterized as a neural CRF constituency parser (Durrett and Klein, 2015; Liu et al., 2018).18 Model Analysis We analyze our best compound PCFG model in more detail. Since we induce a full set of nonterminals in our grammar, we can analyze the learned nonterminals to see if they can be aligned with linguistic constituent labels. Figure 2 visualizes the alignment between induced and gold labels, where for each nonterminal we show the empirical probability that a predicted constituent of this type will correspond to a particular linguistic constituent in the test set, conditioned on its being a correct constituent (for reference we also show the precision). We observe that some of the induced nonterminals clearly align to linguistic nonterminals. More detailed results, including preterminal alignments to part-of-speech tags,19 are shown in appendix A.4. 18While left as future work, it is possible to use the compound PCFG itself as an inference network. Also note that the F1 scores for the URNNGs in Table 3 are optimistic since we selected the best-performing runs of the original models based on validation F1 to parse the training set. 
19As a POS induction system, the many-to-one performance of the compound PCFG using the preterminals is 68.0. A similarly-parameterized compound HMM with 60 hidden states (an HMM is a particularly type of PCFG) with 60 states obtains 63.2. This is still quite a bit lower than the state-of-the-art (Tran et al., 2016; He et al., 2018; Stratos, 2376 he retired as senior vice president finance and administration and chief financial officer of the company oct. N kenneth j. ⟨unk⟩who was named president of this thrift holding company in august resigned citing personal reasons the former president and chief executive eric w. ⟨unk⟩resigned in june ⟨unk⟩’s president and chief executive officer john ⟨unk⟩said the loss stems from several factors mr. ⟨unk⟩is executive vice president and chief financial officer of ⟨unk⟩and will continue in those roles charles j. lawson jr. N who had been acting chief executive since june N will continue as chairman ⟨unk⟩corp. received an N million army contract for helicopter engines boeing co. received a N million air force contract for developing cable systems for the ⟨unk⟩missile general dynamics corp. received a N million air force contract for ⟨unk⟩training sets grumman corp. received an N million navy contract to upgrade aircraft electronics thomson missile products with about half british aerospace ’s annual revenue include the ⟨unk⟩⟨unk⟩missile family already british aerospace and french ⟨unk⟩⟨unk⟩⟨unk⟩on a british missile contract and on an air-traffic control radar system meanwhile during the the s&p trading halt s&p futures sell orders began ⟨unk⟩up while stocks in new york kept falling sharply but the ⟨unk⟩of s&p futures sell orders weighed on the market and the link with stocks began to fray again on friday some market makers were selling again traders said futures traders say the s&p was ⟨unk⟩that the dow could fall as much as N points meanwhile two initial public offerings ⟨unk⟩the ⟨unk⟩market in their ⟨unk⟩day of national over-the-counter trading friday traders said most of their major institutional investors on the other hand sat tight Table 4: For each query sentence (bold), we show the 5 nearest neighbors based on cosine similarity, where we take the representation for each sentence to be the mean of the variational posterior. We next analyze the continuous latent space. Table 4 shows nearest neighbors of some sentences using the mean of the variational posterior as the continuous representation of each sentence. We qualitatively observe that the latent space seems to capture topical information. We are also interested in the variation in the leaves due to z when the variation due to the tree structure is held constant. To investigate this, we use the parsed dataset to obtain pairs of the form (µφ(x(n)), t(n) j ), where t(n) j is the j-th subtree of the (approximate) MAP tree t(n) for the n-th sentence. Therefore each mean vector µφ(x(n)) is associated with |x(n)| −1 subtrees, where |x(n)| is the sentence length. Our definition of subtree here ignores terminals, and thus each subtree is associated with many mean vectors. For a frequently occurring subtree, we perform PCA on the set of mean vectors that are associated with the subtree to obtain the top principal component. We then show the constituents that had the 5 most positive/negative values for this top principal component in Table 5. 
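The subtree analysis just described can be sketched as follows: gather the variational posterior mean vectors associated with a given subtree, take the top principal component of those vectors, and rank the corresponding constituents by their projection onto it (Table 5 reports the extremes). The data layout assumed here, a list of (mean vector, subtree id, constituent string) tuples, is illustrative rather than the authors' analysis code.

```python
import numpy as np

def rank_constituents_by_top_pc(pairs, subtree_id, k=5):
    vecs, spans = [], []
    for mu, st, constituent in pairs:
        if st == subtree_id:
            vecs.append(mu)
            spans.append(constituent)
    X = np.stack(vecs)
    Xc = X - X.mean(axis=0)
    # Top principal component via SVD of the centered matrix of mean vectors.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]
    order = np.argsort(proj)
    # Constituents with the most negative (PC-) and most positive (PC+) projections.
    return [spans[i] for i in order[:k]], [spans[i] for i in order[-k:]]
```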
For example, a particularly common subtree—associated with 180 unique constituents—is given by (NT-04 (T-13 w1) (NT-12 (NT-20 (NT-20 (NT-07 (T-05 w2) (T-45 w3)) (T-35 w4)) (T-40 w5)) (T-22 w6))). The top 5 constituents with the most negative/positive values are shown in the top left part of Table 5. We find that the leaves [w1, . . . , w6], which form a 6-word constituent, vary in a regular manner as z is varied. We also observe that root of this subtree (NT-04) aligns to prepositional phrases (PP) in Figure 2, and the leaves in Table 5 (top left) are indeed mostly PP. However, the 2019), though comparison is confounded by various factors such as preprocessing (e.g. we drop punctuation). A neural PCFG/HMM obtains 68.2 and 63.4 respectively. model fails to identify ((T-40 w5) (T-22 w6)) as a constituent in this case (as well as well in the bottom right example). See appendix A.5 for more examples. It is possible that the model is utilizing the subtrees to capture broad template-like structures and then using z to fill them in, similar to recent works that also train models to separate “what to say” from “how to say it” (Wiseman et al., 2018; Peng et al., 2019; Chen et al., 2019a,b). Limitations We report on some negative results as well as important limitations of our work. While distributed representations promote parameter sharing, we were unable to obtain improvements through more factorized parameterizations that promote even greater parameter sharing. In particular, for rules of the type A →BC, we tried having the output embeddings be a function of the input embeddings (e.g. uBC = g([wB; wC]) where g is an MLP), but obtained worse results. For rules of the type T →w, we tried using a character-level CNN (dos Santos and Zadrozny, 2014; Kim et al., 2016) to obtain the output word embeddings uw (Jozefowicz et al., 2016; Tran et al., 2016), but found the performance to be similar to the word-level case.20 We were also unable to obtain improvements through normalizing flows (Rezende and Mohamed, 2015; Kingma et al., 2016). However, given that we did not exhaustively explore the full space of possible parameterizations, the above modifications could eventually lead to improvements with the right setup. Relatedly, the models were quite sensitive to parameterization (e.g. it was important to use residual layers for f1, f2), grammar size, and optimization method. Finally, despite vectorized GPU im20It is also possible to take advantage of pretrained word embeddings by using them to initialize output word embeddings or directly working with continuous emission distributions (Lin et al., 2015; He et al., 2018) 2377 NT-04 NT-12 T-22 w6 NT-20 T-40 w5 NT-20 T-35 w4 NT-07 T-45 w3 T-05 w2 T-13 w1 PC of the company ’s capital structure in the company ’s divestiture program by the company ’s new board in the company ’s core businesses on the company ’s strategic plan PC + above the treasury ’s N-year note above the treasury ’s seven-year note above the treasury ’s comparable note above the treasury ’s five-year note measured the earth ’s ozone layer NT-23 NT-04 NT-12 NT-04 NT-12 T-21 w7 T-60 w6 T-13 w5 NT-06 T-41 w4 T-05 w3 T-13 w2 T-58 w1 PC purchased through the exercise of stock options circulated by a handful of major brokers higher as a percentage of total loans common with a lot of large companies surprised by the storm of sell orders PC + brought to the u.s. 
against her will laid for the arrest of opposition activists uncertain about the magnitude of structural damage held after the assassination of his mother hurt as a result of the violations NT-10 NT-05 NT-19 NT-04 T-43 w6 T-13 w5 NT-06 T-41 w4 T-05 w3 T-02 w2 T-55 w1 PC to terminate their contract with warner to support a coup in panama to suit the bureaucrats in brussels to thwart his bid for amr to prevent the pound from rising PC + to change our strategy of investing to offset the growth of minimills to be a lot of art to change our way of life to increase the impact of advertising NT-05 NT-19 NT-04 NT-12 T-21 w7 T-60 w6 T-13 w5 NT-06 T-22 w4 NT-20 T-40 w3 T-05 w2 T-02 w1 PC raise the minimum grant for smaller states veto a defense bill with inadequate funding avoid an imminent public or private injury field a competitive slate of congressional candidates alter a longstanding ban on such involvement PC + generate an offsetting profit by selling waves change an export loss to domestic plus expect any immediate problems with margin calls make a positive contribution to our earnings find a trading focus discouraging much participation Table 5: For each subtree, we perform PCA on the variational posterior mean vectors that are associated with that particular subtree and take the top principal component. We list the top 5 constituents that had the lowest (PC -) and highest (PC +) principal component values. plementations, training was significantly more expensive (both in terms of time and memory) than NLM-based grammar induction systems due to the O(|R||x|3) dynamic program, which makes our approach potentially difficult to scale. 6 Related Work Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative (Lari and Young, 1990; Carroll and Charniak, 1992; Charniak, 1993), though Pereira and Schabes (1992) reported some success on partially bracketed data. Clark (2001) and Klein and Manning (2002) were some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of Klein and Manning (2002), which explicitly models both constituents and distituents, was the basis for much subsequent work (Klein and Manning, 2004; Huang et al., 2012; Golland et al., 2012). Other works have explored imposing inductive biases through Bayesian priors (Johnson et al., 2007; Liang et al., 2007; Wang and Blunsom, 2013), modified objectives (Smith and Eisner, 2004), and additional constraints on recursion depth (Noji et al., 2016; Jin et al., 2018). While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. Bod (2006) consider a nonparametric-style approach to unsupervised parsing by using random subsets of training subtrees to parse new sentences. Seginer (2007) utilize an incremental algorithm to unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. Ponvert et al. (2011) induce parse trees through cascaded applications of finite state models. More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. In particular, Shen et al. (2018, 2019) learn tree structures through gated mechanisms within hidden layers of neural language models, while Drozdov et al. (2019) combine recursive autoencoders with the inside-outside algorithm. Kim et al. 
(2019) train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees. 7 Conclusion This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work. Acknowledgments We thank Phil Blunsom for initial discussions, Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for the DIORA dataset. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle. 2378 References Sanjeev Arora, Nadav Cohen, and Elad Hazan. 2018. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. In Proceedings of ICML. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. In Proceedings of NIPS. James K. Baker. 1979. Trainable Grammars for Speech Recognition. In Proceedings of the Spring Conference of the Acoustical Society of America. Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum Learning. In Proceedings of ICML. James O. Berger. 1985. Statistical Decision Theory and Bayesian Analysis. Springer. Rens Bod. 2006. An All-Subtrees Approach to Unsupervised Parsing. In Proceedings of ACL. Glenn Carroll and Eugene Charniak. 1992. Two Experiments on Learning Probabilistic Dependency Grammars from Corpora. In AAAI Workshop on Statistically-Based NLP Techniques. Eugene Charniak. 1993. Statistical Language Learning. MIT Press. Danqi Chen and Christopher D. Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of EMNLP. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. Controllable Paraphrase Generation with a Syntactic Exemplar. In Proceedings of ACL. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. A Multi-task Approach for Disentangling Syntax and Semantics in Sentence Sepresentations. In Proceedings of NAACL. Alexander Clark. 2001. Unsupervised Induction of Stochastic Context Free Grammars Using Distributional Clustering. In Proceedings of CoNLL. Shay B. Cohen. 2016. Bayesian Analysis in Natural Language Processing. Morgan and Claypool. Shay B. Cohen, Kevin Gimpel, and Noah A Smith. 2009. Logistic Normal Priors for Unsupervised Probabilistic Grammar Induction. In Proceedings of NIPS. Shay B. Cohen and Noah A Smith. 2009. Shared Logistic Normal Distributions for Soft Parameter Tying in Unsupervised Grammar Induction. In Proceedings of NAACL. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Andrew Drozdov, Patrick Verga, Mohit Yadev, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders. In Proceedings of NAACL. Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. 2019. Gradient Descent Provably Optimizes Over-parameterized Neural Networks. In Proceedings of ICLR. Greg Durrett and Dan Klein. 2015. 
Neural CRF Parsing. In Proceedings of ACL. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent Neural Network Grammars. In Proceedings of NAACL. Dave Golland, John DeNero, and Jakob Uszkoreit. 2012. A Feature-Rich Constituent Context Model for Grammar Induction. In Proceedings of ACL. Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2018. Unsupervised Learning of Syntactic Structure with Invertible Neural Projections. In Proceedings of EMNLP. Phu Mon Htut, Kyunghyun Cho, and Samuel R. Bowman. 2018. Grammar Induction with Neural Language Models: An Unusual Replication. In Proceedings of EMNLP. Yun Huang, Min Zhang, and Chew Lim Tan. 2012. Improved Constituent Context Model with Features. In Proceedings of PACLIC. Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018. Unsupervised Grammar Induction with Depth-bounded PCFG. In Proceedings of TACL. Mark Johnson. 1998. PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24:613–632. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian Inference for PCFGs via Markov chain Monte Carlo. In Proceedings of NAACL. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-Aware Neural Language Models. In Proceedings of AAAI. Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019. Unsupervised Recurrent Neural Network Grammars. In Proceedings of NAACL. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR. 2379 Diederik P. Kingma, Tim Salimans, and Max Welling. 2016. Improving Variational Inference with Autoregressive Flow. arXiv:1606.04934. Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In Proceedings of ICLR. Nikita Kitaev and Dan Klein. 2018. Constituency Parsing with a Self-Attentive Encoder. In Proceedings of ACL. Dan Klein and Christopher Manning. 2002. A Generative Constituent-Context Model for Improved Grammar Induction. In Proceedings of ACL. Dan Klein and Christopher Manning. 2004. Corpusbased Induction of Syntactic Structure: Models of Dependency and Constituency. In Proceedings of ACL. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of ACL. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better. In Proceedings of ACL. Kenichi Kurihara and Taisuke Sato. 2006. Variational Bayesian Grammar Induction for Natural Language. In Proceedings of International Colloquium on Grammatical Inference. Karim Lari and Steve Young. 1990. The Estimation of Stochastic Context-Free Grammars Using the Inside-Outside Algorithm. Computer Speech and Language, 4:35–56. Bowen Li, Lili Mou, and Frank Keller. 2019. An Imitation Learning Approach to Unsupervised Parsing. In Proceedings of ACL. Percy Liang, Slav Petrov, Michael I. Jordan, and Dan Klein. 2007. The Infinite PCFG using Hierarchical Dirichlet Processes. In Proceedings of EMNLP. Chu-Cheng Lin, Waleed Ammar, Chris Dyer, , and Lori Levin. 2015. Unsupervised POS Induction with Word Embeddings. In Proceedings of NAACL. Yang Liu, Matt Gardner, and Mirella Lapata. 2018. Structured Alignment Networks for Matching Sentences. In Proceedings of EMNLP. 
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. Rebecca Marvin and Tal Linzen. 2018. Targeted Syntactic Evaluation of Language Models. In Proceedings of EMNLP. Bernard Merialdo. 1994. Tagging English Text with a Probabilistic Model. Computational Linguistics, 20(2):155–171. Hiroshi Noji, Yusuke Miyao, and Mark Johnson. 2016. Using Left-corner Parsing to Encode Universal Structural Constraints in Grammar Induction. In Proceedings of EMNLP. Ankur P. Parikh, Shay B. Cohen, and Eric P. Xing. 2014. Spectral Unsupervised Parsing with Additive Tree Metrics. In Proceedings of ACL. Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text Generation with Exemplar-based Adaptive Decoding. In Proceedings of NAACL. Fernando Pereira and Yves Schabes. 1992. InsideOutside Reestimation from Partially Bracketed Corpora. In Proceedings of ACL. Elis Ponvert, Jason Baldridge, and Katrin Erk. 2011. Simpled Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Methods. In Proceedings of ACL. Ofir Press and Lior Wolf. 2016. Using the Output Embedding to Improve Language Models. In Proceedings of EACL. Roi Reichart and Ari Rappoport. 2010. Improved Fully Unsupervised Parsing with Zoomed Learning. In Proceedings of EMNLP. Danilo J. Rezende and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows. In Proceedings of ICML. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of ICML. Herbert Robbins. 1951. Asymptotically Subminimax Solutions of Compound Statistical Decision Problems. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 131–149. Berkeley: University of California Press. Herbert Robbins. 1956. An Empirical Bayes Approach to Statistics. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pages 157–163. Berkeley: University of California Press. C´ıcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning Character-level Representations for Part-of-Speech Tagging. In Proceedings of ICML. Yoav Seginer. 2007. Fast Unsupervised Incremental Parsing. In Proceedings of ACL. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural Language Modeling by Jointly Learning Syntax and Lexicon. In Proceedings of ICLR. 2380 Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks. In Proceedings of ICLR. Noah A. Smith and Jason Eisner. 2004. Annealing Techniques for Unsupervised Statistical Language Learning. In Proceedings of ACL. Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised Multilingual Grammar Induction. In Proceedings of ACL. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2012. Three Dependency-and-Boundary Models for Grammar Induction. In Proceedings of EMNLP-CoNLL. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction. In Proceedings of EMNLP. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A Minimal Span-Based Neural Constituency Parser. In Proceedings of ACL. Karl Stratos. 2019. Mutual Information Maximization for Simple and Accurate Part-of-Speech Induction. 
In Proceedings of NAACL. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of ACL. Ke Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised Neural Hidden Markov Models. In Proceedings of the Workshop on Structured Prediction for NLP. Pengyu Wang and Phil Blunsom. 2013. Collapsed Variational Bayesian Inference for PCFGs. In Proceedings of CoNLL. Wenhui Wang and Baobao Chang. 2016. Graph-based Dependency Parsing with Bidirectional LSTM. In Proceedings of ACL. Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural Supervision Improves Learning of Non-Local Grammatical Dependencies. In Proceedings of NAACL. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning Neural Templates for Text Generation. In Proceedings of EMNLP. Ji Xu, Daniel Hsu, and Arian Maleki. 2018. Benefits of Over-Parameterization with EM. In Proceedings of NeurIPS. Naiwen Xue, Fei Xia, Fu dong Chiou, and Marta Palmer. 2005. The Penn Chinese Treebank: Phrase Structure Annotation of a Large Corpus. Natural Language Engineering, 11:207–238. Cun-Hui Zhang. 2003. Compound Decision Theory and Empirical Bayes Methods. The Annals of Statistics, 31:379–390. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long Short-Term Memory Over Tree Structures. In Proceedings of ICML. 2381 A Appendix A.1 Model Parameterization Neural PCFG We associate an input embedding wN for each symbol N on the left side of a rule (i.e. N ∈{S} ∪N ∪P) and run a neural network over wN to obtain the rule probabilities. Concretely, each rule type πr is parameterized as follows, πS→A = exp(u⊤ A f1(wS)) P A′∈N exp(u⊤ A′ f1(wS)), πA→BC = exp(u⊤ BC wA) P B′C′∈M exp(u⊤ B′C′ wA), πT→w = exp(u⊤ w f2(wT )) P w′∈Σ exp(u⊤ w′ f2(wT )), where M is the product space (N ∪P)×(N ∪P), and f1, f2 are MLPs with two residual layers, fi(x) = gi,1(gi,2(Wix)), gi,j(y) = ReLU(Vi,j ReLU(Ui,jy)) + y. The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure 1 we use the following to refer to rule probabilities of different rule types, πS = {πr | r ∈L(S)}, πN = {πr | r ∈L(A), A ∈N}, πP = {πr | r ∈L(T), T ∈P}, π = πS ∪πN ∪πP, where L(A) denotes the set of rules with A on the left hand side. Compound PCFG The compound PCFG rule probabilities πz given a latent vector z, πz,S→A = exp(u⊤ A f1([wS; z])) P A′∈N exp(u⊤ A′ f1([wS; z])), πz,A→BC = exp(u⊤ BC [wA; z]) P B′C′∈M exp(u⊤ B′C′ [wA; z]), πz,T→w = exp(u⊤ w f2([wT ; z])) P w′∈Σ exp(u⊤ w′ f2([wT ; z])). Again the bias terms are omitted for brevity, and f1, f2 are as before where the first layer’s input dimensions are appropriately changed to account for concatenation with z. A.2 Corpus/Sentence F1 by Sentence Length For completeness we show the corpus-level and sentence-level F1 broken down by sentence length in Table 6, averaged across 4 different runs of each model. A.3 Experiments with RNNGs For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from Kim et al. (2019), which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM (Tai et al., 2015; Zhu et al., 2015) as the composition function. Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (SHIFT or REDUCE) via an affine transformation followed by a sigmoid. 
If SHIFT is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If REDUCE is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different than the original from Dyer et al. (2016), which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar similar to recent works (Wang and Chang, 2016; Stern et al., 2017; Kitaev and Klein, 2018): position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward and backward hidden states. The score sij ∈R for a constituent spanning the i-th and j-th word is given by, sij = MLP([−→ h j+1 −−→ h i; ←− h i−1 −←− h j]), where the MLP has a single hidden layer with 2382 Sentence-level F1 WSJ-10 WSJ-20 WSJ-30 WSJ-40 WSJ-Full Left Branching 17.4 12.9 9.9 8.6 8.7 Right Branching 58.5 49.8 44.4 41.6 39.5 Random Trees 31.8 25.2 21.5 19.7 19.2 PRPN (tuned) 58.4 54.3 50.9 48.5 47.3 ON (tuned) 63.9 57.5 53.2 50.5 48.1 Neural PCFG 64.6 58.1 54.6 52.6 50.8 Compound PCFG 70.5 63.4 58.9 56.6 55.2 Oracle 82.1 84.1 84.2 84.3 84.3 Corpus-level F1 WSJ-10 WSJ-20 WSJ-30 WSJ-40 WSJ-Full Left Branching 16.5 11.7 8.5 7.2 6.0 Right Branching 58.9 48.3 42.5 39.4 36.1 Random Trees 31.9 23.9 20.0 18.1 16.4 PRPN (tuned) 59.3 53.6 49.7 46.9 44.5 ON (tuned) 64.7 56.3 51.5 48.3 45.6 Neural PCFG 63.5 56.8 53.1 51.0 48.7 Compound PCFG 70.6 62.0 57.1 54.6 52.4 Oracle 83.5 85.2 84.9 84.9 84.7 Table 6: Average unlabeled F1 for the various models broken down by sentence length on the PTB test set. For example WSJ-10 refers to F1 calculated on the subset of the test set where the maximum sentence length is at most 10. Scores are averaged across 4 runs of the model with different random seeds. Oracle is the performance of binarized gold trees (with right branching binarization). Top shows sentence-level F1 and bottom shows corpuslevel F1. ReLU nonlinearity followed by layer normalization (Ba et al., 2016). For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to Kim et al. (2019) and their open source implementation21 for additional details. We also observe that as noted by Kim et al. (2019), a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline. The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right branching trees. 
For all models we share input/output word embeddings (Press and Wolf, 2016). Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples. For grammaticality judgment, we modify the publicly available dataset from Marvin and Linzen (2018)22 to only keep sentence pairs that did not have any unknown words with respect to our PTB 21https://github.com/harvardnlp/urnng 22https://github.com/BeckyMarvin/LM syneval vocabulary of 10K words. This results in 33K sentence pairs for evaluation. A.4 Nonterminal/Preterminal Alignments Figure 3 shows the part-of-speech alignments and Table 7 shows the nonterminal label alignments for the compound PCFG/neural PCFG. A.5 Subtree Analysis Table 8 lists more examples of constituents within each subtree as the top principical component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section 5 for more details. See section 5 for more details. 2383 Figure 3: Preterminal alignment to part-of-speech tags for the compound PCFG (top) and the neural PCFG (bottom). 2384 Label S SBAR NP VP PP ADJP ADVP Other Freq. Acc. NT-01 0.0% 0.0% 81.8% 1.1% 0.0% 5.9% 0.0% 11.2% 2.9% 13.8% NT-02 2.2% 0.9% 90.8% 1.7% 0.9% 0.0% 1.3% 2.2% 1.1% 44.0% NT-03 1.0% 0.0% 2.3% 96.8% 0.0% 0.0% 0.0% 0.0% 1.8% 37.1% NT-04 0.3% 2.2% 0.5% 2.0% 93.9% 0.2% 0.6% 0.3% 11.0% 64.9% NT-05 0.2% 0.0% 36.4% 56.9% 0.0% 0.0% 0.2% 6.2% 3.1% 57.1% NT-06 0.0% 0.0% 99.1% 0.0% 0.1% 0.0% 0.2% 0.6% 5.2% 89.0% NT-07 0.0% 0.0% 99.7% 0.0% 0.3% 0.0% 0.0% 0.0% 1.3% 59.3% NT-08 0.5% 2.2% 23.3% 35.6% 11.3% 23.6% 1.7% 1.7% 2.0% 44.3% NT-09 6.3% 5.6% 40.2% 4.3% 32.6% 1.2% 7.0% 2.8% 2.6% 52.1% NT-10 0.1% 0.1% 1.4% 58.8% 38.6% 0.0% 0.8% 0.1% 3.0% 50.5% NT-11 0.9% 0.0% 96.5% 0.9% 0.9% 0.0% 0.0% 0.9% 1.1% 42.9% NT-12 0.5% 0.2% 94.4% 2.4% 0.2% 0.1% 0.2% 2.0% 8.9% 74.9% NT-13 1.6% 0.1% 0.2% 97.7% 0.2% 0.1% 0.1% 0.1% 6.2% 46.0% NT-14 0.0% 0.0% 0.0% 98.6% 0.0% 0.0% 0.0% 1.4% 0.9% 54.1% NT-15 0.0% 0.0% 99.7% 0.0% 0.3% 0.0% 0.0% 0.0% 2.0% 76.9% NT-16 0.0% 0.0% 0.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.3% 29.9% NT-17 96.4% 2.9% 0.0% 0.7% 0.0% 0.0% 0.0% 0.0% 1.2% 24.4% NT-18 0.3% 0.0% 88.7% 2.8% 0.3% 0.0% 0.0% 7.9% 3.0% 28.3% NT-19 3.9% 1.0% 86.6% 2.4% 2.6% 0.4% 1.3% 1.8% 4.5% 53.4% NT-20 0.0% 0.0% 99.0% 0.0% 0.0% 0.3% 0.2% 0.5% 7.4% 17.5% NT-21 94.4% 1.7% 2.0% 1.4% 0.3% 0.1% 0.0% 0.1% 6.2% 34.7% NT-22 0.1% 0.0% 98.4% 1.1% 0.1% 0.0% 0.2% 0.2% 3.5% 77.6% NT-23 0.4% 0.9% 14.0% 53.1% 8.2% 18.5% 4.3% 0.7% 2.4% 49.1% NT-24 0.0% 0.2% 1.5% 98.3% 0.0% 0.0% 0.0% 0.0% 2.3% 47.3% NT-25 0.3% 0.0% 1.4% 98.3% 0.0% 0.0% 0.0% 0.0% 2.2% 34.6% NT-26 0.4% 60.7% 18.4% 3.0% 15.4% 0.4% 0.4% 1.3% 2.1% 23.4% NT-27 0.0% 0.0% 48.7% 0.5% 0.7% 13.1% 3.2% 33.8% 2.0% 59.7% NT-28 88.2% 0.3% 3.8% 0.9% 0.1% 0.0% 0.0% 6.9% 6.7% 76.5% NT-29 0.0% 1.7% 95.8% 1.0% 0.7% 0.0% 0.0% 0.7% 1.0% 62.8% NT-30 1.6% 94.5% 0.6% 1.2% 1.2% 0.0% 0.4% 0.4% 2.1% 49.4% NT-01 0.0% 0.0% 0.0% 99.2% 0.0% 0.0% 0.0% 0.8% 2.6% 41.1% NT-02 0.0% 0.3% 0.3% 99.2% 0.0% 0.0% 0.0% 0.3% 5.3% 15.4% NT-03 88.2% 0.3% 3.6% 1.0% 0.1% 0.0% 0.0% 6.9% 7.2% 71.4% NT-04 0.0% 0.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.5% 2.4% NT-05 0.0% 0.0% 0.0% 96.6% 0.0% 0.0% 0.0% 3.4% 5.0% 1.2% NT-06 0.0% 0.4% 0.4% 98.8% 0.0% 0.0% 0.0% 0.4% 1.2% 43.7% NT-07 0.2% 0.0% 95.3% 0.9% 0.0% 1.6% 0.1% 1.9% 2.8% 60.6% NT-08 1.0% 0.4% 95.3% 2.3% 0.4% 0.2% 0.3% 0.2% 9.4% 63.0% NT-09 0.6% 0.0% 87.4% 1.9% 0.0% 0.0% 0.0% 10.1% 1.0% 33.8% NT-10 78.3% 17.9% 3.0% 0.5% 0.0% 0.0% 0.0% 0.3% 1.9% 42.0% NT-11 0.3% 0.0% 99.0% 0.3% 0.0% 0.3% 0.0% 0.0% 0.9% 70.3% NT-12 0.0% 8.8% 76.5% 2.9% 
5.9% 0.0% 0.0% 5.9% 2.0% 3.6% NT-13 0.5% 2.0% 1.0% 96.6% 0.0% 0.0% 0.0% 0.0% 1.7% 50.7% NT-14 0.0% 0.0% 99.1% 0.0% 0.0% 0.6% 0.0% 0.4% 7.7% 14.8% NT-15 2.9% 0.5% 0.4% 95.5% 0.4% 0.0% 0.0% 0.2% 4.4% 45.2% NT-16 0.4% 0.4% 17.9% 5.6% 64.1% 0.4% 6.8% 4.4% 1.4% 38.1% NT-17 0.1% 0.0% 98.2% 0.5% 0.1% 0.1% 0.1% 0.9% 9.6% 85.4% NT-18 0.1% 0.0% 95.7% 1.6% 0.0% 0.1% 0.2% 2.3% 4.7% 56.2% NT-19 0.0% 0.0% 98.9% 0.0% 0.4% 0.0% 0.0% 0.7% 1.3% 72.6% NT-20 2.0% 22.7% 3.0% 4.8% 63.9% 0.6% 2.3% 0.6% 6.8% 59.0% NT-21 0.0% 0.0% 14.3% 42.9% 0.0% 0.0% 42.9% 0.0% 2.2% 0.7% NT-22 1.4% 0.0% 11.0% 86.3% 0.0% 0.0% 0.0% 1.4% 1.0% 15.2% NT-23 0.1% 0.0% 58.3% 0.8% 0.4% 5.0% 1.7% 33.7% 2.8% 62.7% NT-24 0.0% 0.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.6% 70.2% NT-25 2.2% 0.0% 76.1% 4.3% 0.0% 2.2% 0.0% 15.2% 0.4% 23.5% NT-26 0.0% 0.0% 2.3% 94.2% 3.5% 0.0% 0.0% 0.0% 0.8% 24.0% NT-27 96.6% 0.2% 1.5% 1.1% 0.3% 0.2% 0.0% 0.2% 4.3% 32.2% NT-28 1.2% 3.7% 1.5% 5.8% 85.7% 0.9% 0.9% 0.3% 7.6% 64.9% NT-29 3.0% 82.0% 1.5% 13.5% 0.0% 0.0% 0.0% 0.0% 0.6% 45.4% NT-30 0.0% 0.0% 1.0% 60.2% 19.4% 1.9% 4.9% 12.6% 2.1% 10.4% Gold 15.0% 4.8% 38.5% 21.7% 14.6% 1.7% 0.8% 2.9% Table 7: Analysis of label alignment for nonterminals in the compound PCFG (top) and the neural PCFG (bottom). Label alignment is the proportion of correctly-predicted constistuents that correspond to a particular gold label. We also show the predicited constituent frequency and accuracy (i.e. precision) on the right. Bottom line shows the frequency in the gold trees. 2385 (NT-13 (T-12 w1) (NT-25 (T-39 w2) (T-58 w3))) would be irresponsible has been growing could be delayed ’ve been neglected can be held had been made can be proven had been canceled could be used have been wary (NT-04 (T-13 w1) (NT-12 (T-60 w2) (NT-18 (T-60 w3) (T-21 w4)))) of federally subsidized loans in fairly thin trading of criminal racketeering charges in quiet expiration trading for individual retirement accounts in big technology stocks without prior congressional approval from small price discrepancies between the two concerns by futures-related program buying (NT-04 (T-13 w1) (NT-12 (T-05 w2) (NT-01 (T-18 w3) (T-25 w4)))) by the supreme court in a stock-index arbitrage of the bankruptcy code as a hedging tool to the bankruptcy court of the bond market in a foreign court leaving the stock market for the supreme court after the new york (NT-12 (NT-20 (NT-20 (T-05 w1) (T-40 w2)) (T-40 w3)) (T-22 w4)) a syrian troop pullout the frankfurt stock exchange a conventional soviet attack the late sell programs the house-passed capital-gains provision a great buying opportunity the official creditors committee the most active stocks a syrian troop withdrawal a major brokerage firm (NT-21 (NT-22 (NT-20 (T-05 w1) (T-40 w2)) (T-22 w3)) (NT-13 (T-30 w4) (T-58 w5))) the frankfurt market was mixed the gramm-rudman targets are met the u.s. 
unit edged lower a private meeting is scheduled a news release was prepared the key assumption is valid the stock market closed wednesday the budget scorekeeping is completed the stock market remains fragile the tax bill is enacted (NT-03 (T-07 w1) (NT-19 (NT-20 (NT-20 (T-05 w2) (T-40 w3)) (T-40 w4)) (T-22 w5))) have a high default risk rejected a reagan administration plan have a lower default risk approved a short-term spending bill has a strong practical aspect has an emergency relief program have a good strong credit writes the hud spending bill have one big marketing edge adopted the underlying transportation measure (NT-13 (T-12 w1) (NT-25 (T-39 w2) (NT-23 (T-58 w3) (NT-04 (T-13 w4) (T-43 w5))))) has been operating in paris will be used for expansion has been taken in colombia might be room for flexibility has been vacant since july may be built in britain have been dismal for years will be supported by advertising has been improving since then could be used as weapons (NT-04 (T-13 w1) (NT-12 (NT-06 (NT-20 (T-05 w2) (T-40 w3)) (T-22 w4)) (NT-04 (T-13 w5) (NT-12 (T-18 w6) (T-53 w7))))) for a health center in south carolina with an opposite trade in stock-index futures by a federal jury in new york from the recent volatility in financial markets of the appeals court in new york of another steep plunge in stock prices of the further thaw in u.s.-soviet relations over the past decade as pension funds of the service corps of retired executives by a modest recovery in share prices (NT-10 (T-55 w1) (NT-05 (T-02 w2) (NT-19 (NT-06 (T-05 w3) (T-41 w4)) (NT-04 (T-13 w5) (NT-12 (T-60 w6) (T-21 w7)))))) to integrate the products into their operations to defend the company in such proceedings to offset the problems at radio shack to dismiss an indictment against her claiming to purchase one share of common stock to death some N of his troops to tighten their hold on their business to drop their inquiry into his activities to use the microprocessor in future products to block the maneuver on procedural grounds (NT-13 (T-12 w1) (NT-25 (T-39 w2) (NT-23 (T-58 w3) (NT-04 (T-13 w4) (NT-12 (NT-20 (T-05 w5) (T-40 w6)) (T-22 w7)))))) has been mentioned as a takeover candidate would be run by the joint chiefs has been stuck in a trading range would be made into a separate bill had left announced to the trading mob would be included in the final bill only become active during the closing minutes would be costly given the financial arrangement will get settled in the short term would be restricted by a new bill (NT-10 (T-55 w) (NT-05 (T-02 w1) (NT-19 (NT-06 (T-05 w2) (T-41 w3)) (NT-04 (T-13 w4) (NT-12 (T-60 w5) (NT-18 (T-18 w6) (T-53 w7))))))) to supply that country with other defense systems to enjoy a loyalty among junk bond investors to transfer its skill at designing military equipment to transfer their business to other clearing firms to improve the availability of quality legal service to soften the blow of declining stock prices to unveil a family of high-end personal computers to keep a lid on short-term interest rates to arrange an acceleration of planned tariff cuts to urge the fed toward lower interest rates (NT-21 (NT-22 (T-60 w1) (NT-18 (T-60 w2) (T-21 w3))) (NT-13 (T-07 w4) (NT-02 (NT-27 (T-47 w5) (T-50 w6)) (NT-10 (T-55 w7) (NT-05 (T-47 w8) (T-50 w9)))))) unconsolidated pretax profit increased N % to N billion amex short interest climbed N % to N shares its total revenue rose N % to N billion its pretax profit rose N % to N million total operating revenue grew N % to N billion its pretax profit 
rose N % to N billion its group sales rose N % to N billion fiscal first-half sales slipped N % to N million total operating expenses increased N % to N billion total operating expenses increased N % to N billion Table 8: For each subtree (shown at the top of each set of examples), we perform PCA on the variational posterior mean vectors that are associated with that particular subtree and take the top principal component. We then list the top 5 constituents that had the lowest (left) and highest (right) principal component values.
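The subtree analysis summarized in Table 8 can be reproduced with a short script: for a given subtree template, collect the variational posterior mean vectors of the sentences containing it, project them onto their top principal component, and list the constituents at the two extremes. The sketch below is only an illustration of that procedure under assumed inputs (`mean_vectors`, an array of posterior means, and `constituents`, the corresponding surface strings); it is not the authors' released code.

```python
# Illustrative sketch of the per-subtree PCA analysis described above.
# `mean_vectors` and `constituents` are hypothetical inputs, not names
# taken from the paper's implementation.
import numpy as np
from sklearn.decomposition import PCA

def extreme_constituents(mean_vectors, constituents, k=5):
    """Project posterior mean vectors onto the top principal component and
    return the constituents with the k lowest and k highest component values."""
    pca = PCA(n_components=1)
    scores = pca.fit_transform(np.asarray(mean_vectors)).ravel()
    order = np.argsort(scores)
    lowest = [constituents[i] for i in order[:k]]
    highest = [constituents[i] for i in order[-k:][::-1]]
    return lowest, highest

# Toy usage with random data, just to show the call pattern.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(20, 32))
spans = [f"constituent-{i}" for i in range(20)]
low, high = extreme_constituents(vecs, spans)
```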
2019
228
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2386–2395 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2386 Semi-supervised Domain Adaptation for Dependency Parsing Zhenghua Li1, Xue Peng1, Min Zhang1∗, Rui Wang2, Luo Si2 1Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University 2Alibaba Group, China {zhli13, minzhang}@suda.edu.cn, [email protected] {masi.wr, luo.si}@alibaba-inc.com Abstract During the past decades, due to the lack of sufficient labeled data, most studies on crossdomain parsing focus on unsupervised domain adaptation, assuming there is no targetdomain training data. However, unsupervised approaches make limited progress so far due to the intrinsic difficulty of both domain adaptation and parsing. This paper tackles the semi-supervised domain adaptation problem for Chinese dependency parsing, based on two newly-annotated large-scale domain-specific datasets.1 We propose a simple domain embedding approach to merge the sourceand target-domain training data, which is shown to be more effective than both direct corpus concatenation and multi-task learning. In order to utilize unlabeled target-domain data, we employ the recent contextualized word representations and show that a simple fine-tuning procedure can further boost cross-domain parsing accuracy by large margins. 1 Introduction As a fundamental task in NLP, dependency parsing has attracted a lot of research interest during the past decades due to its multi-lingual applicability in capturing both syntactic and semantic information (K¨ubler et al., 2009; McDonald et al., 2013). Given an input sentence S = w0w1 . . . wn, dependency parsing constructs a tree d = {(h, m, l), 0 ≤h ≤n, 1 ≤m ≤n, l ∈ L}, as depicted in Figure 1, where (h, m, l) is a dependency from the head wh to the modifier wm ∗Corresponding author 1The two domain-specific datasets, plus another one for product comment texts, are also used in the NLPCC-2019 shared task (http://hlt.suda.edu.cn/index. php/Nlpcc-2019-shared-task) on cross-domain Chinese dependency parsing. Please note that the settings for the source-domain training data are different between this work and NLPCC-2019 shared task. $ ł Ł this with white shirt very pretty subj att adv adv obj root Figure 1: An example from the product blogs domain. The English translation is “This looks very pretty with a white shirt.” with the relation label l, and w0 is a pseudo root node. Recently, dependency parsing has achieved tremendous progress thanks to the strong capability of deep neural networks in capturing long-distance contexts (Chen and Manning, 2014; Dyer et al., 2015; Zhou et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Ma et al., 2018). Furthermore, contextualized word representations learned from large-scale unlabeled texts under language model training loss (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018), are proven to be extensively helpful for many NLP tasks including dependency parsing (Che et al., 2018; Clark et al., 2018; Kitaev and Klein, 2018). However, parsing performance drops dramatically when processing texts that are different from the training data, known as the domain adaptation problem. In fact, with the surge of web data (or user generated content), cross-domain parsing has become the major challenge for applying syntactic analysis in realistic NLP systems. 
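To make the formal definition of a dependency tree d = {(h, m, l)} concrete, the following minimal sketch checks that a set of (head, modifier, label) triples over w_1...w_n, with pseudo root w_0, is well formed (exactly one head per word, no cycles). The toy arcs are hypothetical and only mirror the definition above; they are not the (garbled) Figure 1 example.

```python
# Minimal illustration of the dependency-tree definition: a tree over w_1..w_n
# is a set of (head, modifier, label) triples with a pseudo root w_0.
def is_well_formed(arcs, n):
    """Check that `arcs` (list of (h, m, label) with 0 <= h <= n, 1 <= m <= n)
    assigns exactly one head to every modifier and contains no cycle."""
    heads = {}
    for h, m, _ in arcs:
        if m in heads:              # a modifier may have only one head
            return False
        heads[m] = h
    if set(heads) != set(range(1, n + 1)):
        return False
    for m in range(1, n + 1):       # every word must reach the pseudo root w_0
        seen, node = set(), m
        while node != 0:
            if node in seen:
                return False        # cycle detected
            seen.add(node)
            node = heads[node]
    return True

# A 3-word toy sentence: w_2 depends on the pseudo root, w_1 and w_3 on w_2.
toy_arcs = [(2, 1, "subj"), (0, 2, "root"), (2, 3, "obj")]
assert is_well_formed(toy_arcs, n=3)
```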
To meet this challenge, the community has organized several shared tasks to attract more research attention (Nivre et al., 2007; Hajiˇc et al., 2009; Petrov and McDonald, 2012). 2387 Hindered by the lack of sufficient labeled data, most previous works on cross-domain parsing, including the aforementioned shared tasks, assume there is no labeled target-domain training data and thus focus on unsupervised domain adaptation. So far, approaches in this direction have made limited progress, due to the intrinsic difficulty of both domain adaptation and parsing (see discussions in Section 5). On the other hand, due to the extreme complexity and heavy cost, progress on syntactic data annotation on new-domain texts has been very slow, and only several small-scale datasets on web texts have been built, mostly as evaluation data for cross-domain parsing (Foster et al., 2011; Petrov and McDonald, 2012; Kong et al., 2014; Wang et al., 2014). To meet the above challenges, this paper presents two newly-annotated large-scale domainaware datasets (over 12K sentences), and try to tackle the task of semi-supervised domain adaptation for Chinese dependency parsing. With the access of both labeled and unlabeled targetdomain data, we propose and evaluate several simple approaches and conduct error analysis in order to investigate the following three questions: Q1: How to effectively combine the source- and target-domain labeled training data? Q2: How to utilize the target-domain unlabeled data for further improvements? Q3: Given a certain amount of labeled data, how much data are needed to annotate to reach a certain performance on a new domain? As our reviewers point out, the semi-supervised domain-adaptation scenario, tackled in this work, is less realistic than the unsupervised counterpart, due to need of labeled target-domain training data, which is usually extremely expensive. However, we believe that this work can be equally valuable and useful when there exist only dozens or hundreds of labeled targetdomain training sentences, which may be a feasible compromise for realistic applications of parsing techniques, considering that, as discussed above, purely unsupervised domain adaptation makes very limited progress. We will also release all annotated data at http: //hlt.suda.edu.cn/index.php/SUCDT and codes at https://github.com/ SUDA-LA/ACL2019-dp-cross-domain. 2 Data Annotation In this work, we choose two typical domain-aware web texts for annotation, i.e., product blogs and web fictions. This section introduces the details about the data annotation procedure. Data selection. The product blog (PB) texts are crawled from the Taobao headline website, which contains articles written by users mainly on description and comparison of different commercial products. After data cleaning and automatic word segmentation, we have collected about 340K sentences. Then, we select 10 thousand sentences with [5, 25] words for manual annotation following the active learning workflow of Jiang et al. (2018). The remaining sentences are used as unlabeled data. For web fictions, we follow the work on cross-domain word segmentation of Zhang et al. (2014), and adopt the popular novel named as “Zhuxian” (ZX, also known as “Jade dynasty”). Among their annotated 4,555 sentences, we select about 3,400 sentences with [5, 45] words for annotation. The remaining 32K sentences of ZX are used as unlabeled data in this work. Annotation guideline. 
After comparing several publicly available guidelines for dependency parsing including the universal dependencies (UD) (McDonald et al., 2013), we adopt the guideline released by Jiang et al. (2018) based on three considerations. First, their guideline contains 20 relations specifically designed to capture Chinese dependency syntax for texts of different sources. Second, the 70-page guideline gives very detailed illustrations with many concrete examples. Third, they have constructed a large-scale balanced corpus (BC), which is used as the sourcedomain labeled data in this work. Quality Control. We employ about 15 undergraduate students as annotators, and select 5 experienced annotators with linguistic background as the expert annotators. Each annotator is intensively trained to be familiar with the guideline. Based on our browser-based annotation platform, we apply strict double annotation to guarantee the quality of the labeled data. First, each raw sentence with automatic word segmentation is randomly assigned to two annotators. The annotation is accepted if the two submissions are the same. Otherwise, a third expert annotator decides the answer after comparing and analyzing the two submissions. Statistics and Analysis. After removing the 2388 PB ZX consensus ratio (sent) 35.88 46.20 consensus ratio (token) 69.38 79.21 OOV ratio 26.68 17.91 Table 1: Analysis of the annotated data. sentences with wrong word segmentation or incomprehensible semantics, we obtain 9,040 PB sentences and 3,249 ZX sentences. We analyze the two datasets from three aspects, as shown in Table 1. The sentence-wise consensus ratio is the percent of sentences that receive completely the same submission from two annotators, which is only 35% for PB and 46% for ZX. This means that more than a half of all sentences need to be checked by expert annotators, showing the complexity of syntactic annotation and the necessity of double annotation for quality guarantee. The token-wise consensus ratio is the percent of tokens that receive the same heads and labels from two annotators, which is still lower than 70% for PB and 80% for ZX. These consensus ratios clearly show that PB is more difficult to annotate than ZX. As user generated content, PB is much more casual and contains a lot of word ellipsis phenomena, wrongly written characters, abbreviated words, ill-grammar expressions, and so on. The OOV (out-of-vocabulary) ratio means the percent of tokens that do not occur in the sourcedomain BC data of Jiang et al. (2018). We can see that the OOV ratio is much higher in PB than ZX, which would certainly make PB more difficult to parse. 3 Approaches This section presents several semi-supervised cross-domain parsing approaches. 3.1 Base Biaffine Parser In this work, we build all the approaches over the state-of-the-art deep biaffine parser (Dozat and Manning, 2017). As a graph-based dependency parser, it employs a deep biaffine neural network to compute the scores of all dependencies, and uses viterbi decoding to find the highest-scoring tree. Figure 2 shows how to compute the score of an arc score(i ←j). First, the biaffine parser applies multi-layer ... ... ... ... Inputs xi xi+1 ... xj ... ... ... ... BiLSTM MLPD MLPH hj hi rD i rH j Biaffine score(i ←j) Figure 2: Computation of score(i ←j) in the biaffine parser. For simplicity, we only draw two-layer BiLSTMs. bidirectional sequential LSTMs (BiLSTM) to encode the input sentence. The input of the i-th word is the concatenation of word/tag embeddings, i.e., xi = ewi ⊕eti. 
The output vector of the top-layer BiLSTM for the i-th word is denoted as $\mathbf{h}_i$. It is fed into two separate MLPs to get two lower-dimensional representation vectors of the word, as a head and as a dependent respectively.

$$\mathbf{r}^H_i,\ \mathbf{r}^D_i = \mathrm{MLP}^H(\mathbf{h}_i),\ \mathrm{MLP}^D(\mathbf{h}_i) \quad (1)$$

Finally, the score of an arc is computed via a biaffine operation.

$$\mathrm{score}(i \leftarrow j) = \begin{bmatrix} \mathbf{r}^D_i \\ 1 \end{bmatrix}^{\mathrm{T}} \mathbf{W}_b\, \mathbf{r}^H_j \quad (2)$$

Similarly, the parser uses extra MLPs and biaffines to compute label scores $\mathrm{score}(i \xleftarrow{l} j)$. Due to space limitation, we refer readers to Dozat and Manning (2017) for more details.

Training loss. For each $w_i$ and its gold-standard head $w_j$ and label $l$, the parser adopts local cross-entropy losses.

$$\mathrm{loss}(i \xleftarrow{l} j) = -\log \frac{e^{\mathrm{score}(i \leftarrow j)}}{\sum_{0 \le k \le n} e^{\mathrm{score}(i \leftarrow k)}} - \log \frac{e^{\mathrm{score}(i \xleftarrow{l} j)}}{\sum_{l' \in L} e^{\mathrm{score}(i \xleftarrow{l'} j)}} \quad (3)$$

where $L$ is the label set. Separate losses are computed for head selection and labeling.

Figure 3: The framework of the DOEMB approach, where the input $x_i$ at each word position is concatenated with $e_{domain}$ before the shared BiLSTMs, MLPs, and biaffines; domain = "src" for source-domain sentences and "tgt" for target-domain ones.

3.2 Combining Two Training Datasets In this subsection, we describe three simple approaches for combining the source- and target-domain training datasets. (1) Direct concatenation (CONCAT). The most straightforward way is to directly merge multiple training datasets into a larger one. This method treats the source- and target-domain training datasets equally. The basic parser can be directly used with little modification. The major drawback of this method is that the model uses the same parameters for both domains, and thus is unable to learn domain-specific features. (2) Domain embedding (DOEMB). Stymne et al. (2018) propose a treebank embedding approach to improve parsing by utilizing multiple heterogeneous treebanks (following diverse annotation guidelines) for a language. Inspired by their work, we propose to concatenate each word position with an extra domain embedding to indicate which domain the training sentence comes from, as illustrated in Figure 3. In this way, we expect the model can fully utilize both training datasets, since most parameters are shared except the two domain embedding vectors, and learn to distinguish the domain-specific and general features as well. (3) Multi-task learning (MTL) aims to incorporate labeled data of multiple related tasks for improving performance (Collobert and Weston, 2008). Guo et al. (2016) first employ MTL to improve parsing performance by utilizing multiple heterogeneous treebanks and treating each treebank as a separate task. As shown in Figure 4, we make a straightforward extension to the biaffine parser to realize multi-task learning.

Figure 4: The framework of MTL, with shared BiLSTMs but separate MLPs and biaffines for the source and target domains.

The source-domain and target-domain parsing are treated as two individual tasks with shared parameters for word/tag embeddings and BiLSTMs. The main weakness of MTL is that the model cannot make full use of the source-domain labeled data, since the source-domain training data only contributes to the training of the shared parameters. The corpus weighting strategy. For all of the above three approaches, the target-domain labeled data would be overwhelmed by the source-domain data during training if directly combined, since there usually exists a very big gap in their scale. Therefore, we employ the simple corpus weighting strategy (Li et al., 2014) as a useful trick.
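To make Eqs. (1)–(2) and the DOEMB input of Figure 3 concrete, the following PyTorch-style sketch scores all dependent–head pairs with a biaffine layer on top of BiLSTM states and builds the domain-augmented input. Module names, dimensions, and the initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Sketch of Eqs. (1)-(2): MLP^H / MLP^D over BiLSTM states, then a
    biaffine product (with a bias dimension on the dependent side)."""
    def __init__(self, lstm_dim, mlp_dim):
        super().__init__()
        self.mlp_h = nn.Sequential(nn.Linear(lstm_dim, mlp_dim), nn.ReLU())
        self.mlp_d = nn.Sequential(nn.Linear(lstm_dim, mlp_dim), nn.ReLU())
        self.W_b = nn.Parameter(torch.randn(mlp_dim + 1, mlp_dim) * 0.01)

    def forward(self, h):                      # h: (seq_len, lstm_dim) BiLSTM outputs
        r_h = self.mlp_h(h)                    # head representations r^H
        r_d = self.mlp_d(h)                    # dependent representations r^D
        ones = torch.ones(r_d.size(0), 1, device=h.device)
        r_d = torch.cat([r_d, ones], dim=-1)   # append the constant "1" of Eq. (2)
        return r_d @ self.W_b @ r_h.t()        # scores[i, j] = score(i <- j)

def doemb_input(word_emb, tag_emb, domain_emb):
    """DOEMB input (Figure 3): concatenate word/tag embeddings with a domain
    embedding ("src" or "tgt") at every word position."""
    n = word_emb.size(0)
    return torch.cat([word_emb, tag_emb, domain_emb.expand(n, -1)], dim=-1)
```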
Before each iteration, we randomly sample training sentences separately from the target- and sourcedomain training data in the proportion of 1 : M. Then we merge and randomly shuffle the sampled data for one-iteration training. We treat M ≥1 as a hyper-parameter tuned on the dev data. 3.3 Utilizing Unlabeled Data Besides labeled data, how to exploit unlabeled data, both target- and source-domain, has been an interesting and important direction for crossdomain parsing for a long time, as discussed in Section 5. Recently, Peters et al. (2018) introduce embeddings from language models (ELMo) to effectively utilize large amount of raw texts as a pretraining step. They use multiple BiLSTM layers as the sentence encoder and employ left-toright sequential language model losses. In this work, we propose a very simple twostep approach to apply ELMo to the cross-domain scenario. Step 1: Training ELMo on a large-scale general-domain unlabeled data. We train ELMo on the Chinese Gigaword Third Edition, consisting of about 1.2 million sentences. It takes about 7 days using 6 GPU nodes (GTX 1080Ti). 2390 Step 2: Fine-tuning ELMo on the targetdomain unlabeled data. We then fine-tune ELMo on the target-domain unlabeled data using the parameters trained in the previous step as the start point. To save computation resource, we merge all train/dev/unlabeled data of all three domains as one unlabeled dataset for fine-tuning ELMo once, and use the same fine-tuned ELMo for all three domains. For each word, the representations from the three BiLSTM layers of ELMo are averaged and used to replace the original word embeddings in the Biaffine Parser. We did not try to let the model automatically learn different weights for different layers, which may leads to slightly better performance. Since ELMo uses charLSTM to learn the first-layer word representations, we did try to expand the character dictionary with those that only occur in the target-domain unlabeled data, and randomly initialize their corresponding char embeddings before fine-tuning ELMo. However, this only produces slight and inconsistent performance gains. 4 Experiments Data. We use the balanced corpus (BC) released by Jiang et al. (2018) as the source domain, following their train/dev/test split. We use our newly annotated PB/ZX datasets as two target domains, and split each into train/dev/test, with the consideration that the dev/test datasets are made as large as possible for the sake of more reliable evaluation. We also provide target-domain unlabeled data, as discussed in Section 2. Table 2 shows the data statistics. Evaluation metrics. We use the standard labeled attachment score (LAS, percent of words that receives correct heads and labels) and unlabeled attachment score (UAS, ignoring labels). Parser settings. We implement the basic biaffine parser and the proposed approaches with PyTorch. We follow the hyperparameter settings of Dozat and Manning (2017), such as learning rate and dropout ratios. Each parser is trained for at most 1, 000 iterations, and the performance is evaluated on the dev data after each iteration for model selection. We stop the training if the peak performance does not increase in 50 consecutive iterations. BC PB ZX train 52,433 5,140 1,649 dev 998 1,300 500 test 1,995 2,600 1,100 unlabeled 326,981 32,492 Table 2: Data statistics in sentence number. 
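A minimal sketch of the 1 : M corpus-weighting trick described in Section 3.2 is given below. It follows one plausible reading of the procedure (keep the target-domain training sentences and draw a fresh sample of M times as many source-domain sentences before every iteration); the function and variable names are illustrative, not from the released code.

```python
import random

def corpus_weighted_iteration(target_sents, source_sents, m, rng=random):
    """One plausible reading of the 1:M corpus weighting: pair the target-domain
    training sentences with a fresh random sample of m times as many
    source-domain sentences, then shuffle the mixture. Called once per iteration."""
    k = min(m * len(target_sents), len(source_sents))
    mixed = list(target_sents) + rng.sample(list(source_sents), k)
    rng.shuffle(mixed)
    return mixed

# Hypothetical usage; M (>= 1) is a hyper-parameter tuned on the dev data.
# iteration_data = corpus_weighted_iteration(pb_train, bc_train, m=4)
```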
Trained on BC PB ZX UAS LAS UAS LAS UAS LAS BC 82.77 77.66 68.73 61.93 69.34 61.32 PB 62.10 55.20 75.85 70.12 51.50 41.92 ZX 56.15 48.34 52.56 43.76 69.54 63.65 Table 3: Performance on dev data of models trained on a single-domain training data. 4.1 Single-domain Training Results Table 3 presents parsing accuracy on the dev data when training each parser on a single-domain training data. We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the usefulness and importance of target-domain labeled data especially when two domains are very dissimilar. However, the gap between the ZX-trained parser and the BC-trained is only about 2% in LAS, which we believe has a two-fold reason. First, the size of ZX-train is even smaller, and is only less than one third of that of PB-train. Second, the BC corpus are from the People Daily newspaper and probably contains novel articles, which are more similar to ZX. Overall, it is clear and reasonable that the parser achieves best performance on a given domain when the training data is from the same domain. 4.2 Combining Two Training Datasets We combine the source- and target-domain training data using the three approaches described in Section 3.2. Due to the big gap between the size of the source- and target-domain training data, we employ the corpus weighting strategy to balance the effect of difference sources. Figure 5 shows the results on the dev data with different weighting factor M. The curves on both PB and ZX clearly show that corpus weighting is extensively helpful, and the performance gap between a good weight factor and a bad one can 2391 60 62 64 66 68 70 72 74 2 4 6 8 10 LAS (%) Corpus Weighting Factor M DOEMB on PB CONCAT on PB MTL on PB DOEMB on ZX CONCAT on ZX MTL on ZX Figure 5: Effect of corpus weighting on different approaches on the dev data. be large for certain target domains and methods. Specifically, for PB as the target domain, it seems sufficient to use the same weight for the source and target domains (i.e., M = 1), and choosing a proper larger M leads to less than 1% improvement. In contrast, corpus weighting is more important for the ZX domain, and leads to much better performance with a larger M. In addition to the very small size of ZX-train, another reason may be due to the large similarity between ZX and BC, as previously discussed. From another aspect, we can see that the DOEMB approach always performs best among the three approaches on both target domains, and MTL is the most ineffective in making use of the source-domain training data. Overall, the results are consistent with our discussions in Section 3.2. The key of the success of DOEMB over both CONCAT and MTL lies in the balance between merging the knowledge in both domains by sharing more parameters and distinguishing the two domains in order to learn domain-specific and general features. For each method-domain pair, we select the best corpus weighting M according to their results on the dev data. 4.3 Utilization of Unlabeled Data In this part, we enhance the most effective DOEMB approach with ELMo with the approach described in Section 3.3. Table 4 reports the results. Surprisingly, using the ELMo trained on general-domain Chinese Gigaword corpus has opposite effect on the two target domains. LAS decreases by 0.99 on PB but increases by 1.16 on ZX. 
We suspect the reason may be that that PB ZX UAS LAS UAS LAS DOEMB 78.97 73.93 78.64 73.87 + ELMo (Giga) 78.49 72.94 79.92 75.03 + Fine-tuning 83.08 78.37 81.48 76.51 Table 4: Performance of the DOEMB approach enhanced with ELMo on the dev data. Chinese gigaword corpus, like BC, contains many novel-related texts that are similar to ZX. In contrast, it is quite unlikely to have texts similar to PB, considering the PB texts are usually recent user-generated content. This finding is different from the in-domain parsing results, where ELMo is always helpful (Che et al., 2018; Clark et al., 2018) Further fine-tuning ELMo on target-domain unlabeled data leads to consistent and large improvement on both domains. Compared with “ELMo (Giga)”, LAS increases by 5.43 on PB and 1.48 on ZX. We believe the larger improvement on PB versus ZX is mainly due to the much larger scale of unlabeled PB data. The results demonstrate that through fine-tuning on targetdomain unlabeled data, ELMo effectively learns domain-specific knowledge, and is able to produce more reliable contextualized word representations. 4.4 Final Results On Test Data Table 5 shows the final results on the test data, which are consistent with the previous observations. First, when constrained on single-domain training data, using the target-domain data is the most effective. Second, using source-domain data as extra training data is helpful, and the DOEMB method performs the best. Third, it is extremely useful and efficient to first train ELMo on very large-scale general-purpose unlabeled data and then fine-tune it on relatively small-scale targetdomain unlabeled data. 4.5 Analysis The final performances on PB are consistently higher than those on ZX by about 2%, as shown in Table 5. We believe one major reason is PB-train is more than three times larger than ZX-train. This then raises an interesting and important question. When facing a new domain, how much data do we need to annotate to reach a certain performance given a certain amount of source-domain data? 2392 PB ZX UAS LAS UAS LAS Trained on single-domain data BC-train 67.55 61.01 68.44 59.55 PB-train 74.52 69.02 51.62 40.36 ZX-train 52.24 42.76 68.14 61.71 Trained on source- and target-domain data MTL 75.39 69.69 72.11 65.66 CONCAT 77.49 72.16 76.80 70.85 DOEMB 78.24 72.81 77.96 72.04 + ELMo 77.62 72.35 78.50 72.49 + Fine-tuning 82.05 77.16 80.44 75.11 Table 5: Final results on the test data. 68 70 72 74 76 78 0 1 2 4 8 16 32 LAS (%) Ratio of BC-train size to PB/ZX-train size PB-train 5K PB-train 1.5K ZX-train 1.5K Figure 6: The effect of the relative size of the targetdomain training data. We try to give some clues through the following analysis. Effect of the source-domain data size is shown in Figure 6. We fix the size of the target-domain data and increase the size of the source-domain data by using a random subset of BC-train. The “PB/ZX-train 1.5K” curves are based on random 1500 PB/ZX-train sentences in order to make fair comparison, and the “PB-train 5K” curve uses random 5000 PB-train sentences in order to understand the effect of larger targetdomain data. For example, “4” at the x-axis means that the size of BC-train is four times as much as that of the target-domain data. We can see that when the size of the targetdomain data is small, i.e., “PB/ZX-train 1.5K”, adding more source-domain BC-train data leads to consistent improvements. 
In split of the same data size, “PB-train 1.5K” and “ZX-train 1.5K” still have a large performance gap, which is probably caused by the effect of ELMo with the much larger 70 71 72 73 74 75 76 77 78 79 0 625 1250 2500 5000 LAS (%) PB-train sentence number used BC-train 50K BC-train 10K Figure 7: The effect of the size of the target-domain training data. scale of unlabeled PB data, although ZX is easier to parse as discussed in Section 2. In contrast, for the larger “PB-train 5K”, the peak LAS is obtained when 10K BC-train sentences are used, and using more BC-train data even slightly hurts performance. This shows that when the target-domain training data is large, the usefulness of the source-domain data becomes limited. Effect of the target-domain data size is shown in Figure 7. Due to the small size of ZX-train, we only experiment with PB-train. We draw a “BCtrain 10K” curve, since the previous analysis show that its combination with “PB-train 5K” already reaches peak performance. We can see that exponentially enlarging the size of the target-domain data leads to nearly linearized improvement, indicating data annotation is the most direct and effective (or maybe necessary) way for improving cross-domain parsing performance. On the other hand, we can see although the final performance is nearly the same for BC-train 50K and 10K, the 50K curve is obviously more steady and consistent, showing that it is usually a wise choice to use all available source-domain data. 5 Related Works Domain adaptation has been a crucial and challenging research topic in both NLP and ML fields. Due to the vast scope of related research, we try to give a brief (and far from complete) review on some representative approaches of high relevance with syntactic parsing. Unsupervised domain adaptation. Due to the lack of sufficient labeled data, most previous works focuses on unsupervised domain adapta2393 tion, assuming there is only labeled data for the source domain. Researchers make great effort to learn useful features from large-scale unlabeled target-domain data, which is usually much easier to collect. As a typical semi-supervised approach, self-training is shown to be very useful for cross-domain constituent parsing (McClosky et al., 2006) and dependency parsing (Yu et al., 2015). There are also many failed works on applying self-training for in-domain and crossdomain dependency parsing. Sagae and Tsujii (2007) apply co-training to the CoNLL-2007 cross-domain dependency parsing task and report positive gains (Nivre et al., 2007). In contrast, Dredze et al. (2007) experiment with many domain adaptation approaches with no success on the same datasets and suggest the major obstacle comes from the divergent annotation guideline adopted by the target-domain evaluation data. Source-domain data selection is another interesting research direction. Given a target domain, the idea is to automatically select a most relevant subset from the source-domain training data to train the parsing model, instead of using all the labeled data (Plank and van Noord, 2011; Khan et al., 2013). The multi-source domain adaptation problem assumes there are labeled datasets for multiple source domains. Given a target domain, the challenge is how to effectively combine knowledge in the source domains. McClosky et al. (2010) first raise this scenario for constituent parsing. They employ a regression model to predict crossdomain performance, and then use the values to combine parsing models independently trained on each source domain. 
Guo et al. (2018) employ a similar idea of mixture of experts under the neural MTL framework, and conduct experiments on sentiment classification and POS tagging tasks. They employ meta-training to learn to compute the point-to-set distance between a target-domain example and a source domain. Semi-supervised domain adaptation assumes there exist some (usually very small-scale) labeled target-domain data, which can be used to directly learn the domain-specific distributions or features. Daum´e III (2007) propose a simple yet effective feature augmentation approach that performs well on a number of sequence labeling tasks. The idea is to distinguish domain-specific and general features by making a copy of each feature for each domain plus a shared (general) pseudo domain. Finkel and Manning (2009) further propose a hierarchical Bayesian extension of this idea. As pointed by Finkel and Manning (2009), those two works can be understood as MTL under the traditional discrete-feature ML framework. Kim et al. (2017) propose a neural mixture of experts approach for cross-domain intent classification and slot tagging. Different from the unsupervised method of Guo et al. (2018), they use a small amount of target-domain labeled data to train an attention module for the computation of example-to-domain distances. In the parsing community, Flannery and Mori (2015) propose to annotate partially labeled target-domain data with active learning for cross-domain Japanese dependency parsing. Similarly, Joshi et al. (2018) annotate a few dozen partially labeled target-domain sentences with a few brackets for cross-domain constituent parsing. Both results report large improvement and show the usefulness of even small amount of target-domain annotation, showing the great potential of semi-supervised domain adaptation for parsing. 6 Conclusions This work addresses the task of semi-supervised domain adaptation for Chinese dependency parsing, based on our two newly-annotated large-scale domain-aware data, i.e., PB and ZX. We propose a simple domain embedding approach with corpus weighting to effectively combine both the sourceand target-domain training data. To utilize unlabeled target-domain data, We further propose an effective two-stage approach based on the recently proposed contextualized word representations (ELMo). Our proposed semi-supervised domain adaptation approach leads to absolute LAS improvement of 16.15% (77.16 vs. 61.01) and 15.56% (75.11 vs. 59.55) on PB/ZX-test respectively, over the non-adapted parser trained on the source BC-train. Moreover, detailed analysis shows that enlarging the target-domain labeled data is most effective in boost cross-domain parsing performance. Meanwhile, more source-domain labeled data usually leads to higher and more consistent improvement, especially when the scale of the targetdomain training data is small. 2394 Acknowledgments The authors would like to thank the anonymous reviewers for the helpful comments. We are greatly grateful to all participants in data annotation for their hard work. This work was supported by National Natural Science Foundation of China (Grant No. 61876116, 61525205, 61572338) and was also partially supported by the joint research project of Alibaba and Soochow University. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of ACL, pages 2442–2452. 
Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings. In Proceedings of CoNLL Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 2227–2237. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740– 750. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of EMNLP, pages 1914–1925. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL, pages 256–263. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependecy parsing. In Proceedings of ICLR. Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Jo˜ao Graca, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1051–1055. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL, pages 334– 343. Jenny Rose Finkel and Christopher D. Manning. 2009. Hierarchical bayesian domain adaptation. In Proceedings of NAACL, pages 602–610. Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11–19. Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. From news to comment: Resources and benchmarks for parsing the language of web 2.0. In Proceedings of IJCNLP, pages 893– 901. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2016. A universal framework for inductive transfer parsing across multi-typed treebanks. In Proceedings of COLING, pages 12–22. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In Proceedings of EMNLP, pages 4694– 4703. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL. Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, and Luo Si. 2018. Supervised treebank conversion: Data and approaches. In Proceedings of ACL, pages 2706–2716. Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. In Proceedings of ACL, pages 1190–1199. Mohammad Khan, Markus Dickinson, and Sandra K¨ubler. 2013. Towards domain adaptation for parsing web data. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 357–364. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. 
Domain attention with an ensemble of experts. In Proceedings of ACL, pages 643–653. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. CoRR, abs/1603.04351. 2395 Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of ACL, pages 2675–2685. Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In Proceedings of EMNLP, pages 1001–1012. Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing (Synthesis Lectures On Human Language Technologies). Morgan and Claypool Publishers. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Ambiguity-aware ensemble training for semisupervised dependency parsing. In Proceedings of ACL, pages 457–467. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard H. Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of ACL, pages 1403–1414. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of ACL, pages 337–344. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proceedings of NAACL-HLT, pages 28– 36. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of ACL, pages 92–97. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The coNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915–932. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Proceedings of ACL, pages 1566–1576. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI Technical Report. Kenji Sagae and Jun’ichi Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL, pages 1044–1050. William Yang Wang, Lingpeng Kong, Kathryn Mazaitis, and William W Cohen. 2014. Dependency parsing for weibo: An efficient probabilistic logic programming approach. In Proceedings of EMNLP, pages 1152–1158. Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via selftraining. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1–10. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Type-supervised domain adaptation for joint segmentation and pos-tagging. In Proceedings of EACL, pages 588–597. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of ACL, pages 1213–1222.
2019
229
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 229–240 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 229 Neural Relation Extraction for Knowledge Base Enrichment Bayu Distiawan Trisedya1, Gerhard Weikum2, Jianzhong Qi1, Rui Zhang1∗ 1 The University of Melbourne, Australia 2 Max Planck Institute for Informatics, Saarland Informatics Campus, Germany {btrisedya@student, jianzhong.qi@, rui.zhang@}unimelb.edu.au [email protected] Abstract We study relation extraction for knowledge base (KB) enrichment. Specifically, we aim to extract entities and their relationships from sentences in the form of triples and map the elements of the extracted triples to an existing KB in an end-to-end manner. Previous studies focus on the extraction itself and rely on Named Entity Disambiguation (NED) to map triples into the KB space. This way, NED errors may cause extraction errors that affect the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model. We collect high-quality training data by distant supervision with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that captures multi-word entity names in a sentence. Our model employs jointly learned word and entity embeddings to support named entity disambiguation. Finally, our model uses a modified beam search and a triple classifier to help generate high-quality triples. Our model outperforms state-of-theart baselines by 15.51% and 8.38% in terms of F1 score on two real-world datasets. 1 Introduction Knowledge bases (KBs), often in the form of knowledge graphs (KGs), have become essential resources in many tasks including Q&A systems, recommender system, and natural language generation. Large KBs such as DBpedia (Auer et al., 2007), Wikidata (Vrandecic and Kr¨otzsch, 2014) and Yago (Suchanek et al., 2007) contain millions of facts about entities, which are represented in the form of subject-predicate-object triples. However, these KBs are far from complete and mandate continuous enrichment and curation. ∗Rui Zhang is the corresponding author. Input sentence: "New York University is a private university in Manhattan." Unsupervised approach output: ⟨NYU,is,private university⟩ ⟨NYU,is private university in,Manhattan⟩ Supervised approach output: ⟨NYU, instance of, Private University⟩ ⟨NYU, located in, Manhattan⟩ Canonicalized output: ⟨Q49210, P31, Q902104⟩ ⟨Q49210, P131, Q11299⟩ Table 1: Relation extraction example. Previous studies work on embedding-based model (Nguyen et al., 2018; Wang et al., 2015) and entity alignment model (Chen et al., 2017; Sun et al., 2017; Trisedya et al., 2019) to enrich a knowledge base. Following the success of the sequence-to-sequence architecture (Bahdanau et al., 2015) for generating sentences from structured data (Marcheggiani and Perez-Beltrachini, 2018; Trisedya et al., 2018), we employ this architecture to do the opposite, which is extracting triples from a sentence. In this paper, we study how to enrich a KB by relation exaction from textual sources. Specifically, we aim to extract triples in the form of ⟨h, r, t⟩, where h is a head entity, t is a tail entity, and r is a relationship between the entities. 
Importantly, as KBs typically have much better coverage on entities than on relationships, we assume that h and t are existing entities in a KB, r is a predicate that falls in a predefined set of predicates we are interested in, but the relationship ⟨h, r, t⟩does not exist in the KB yet. We aim to find more relationships between h and t and add them to the KB. For example, from the first extracted triples in Table 1 we may recognize two entities "NYU" (abbreviation of New York University) and "Private University", which already exist in the KB; 230 also the predicate "instance of" is in the set of predefined predicates we are interested in, but the relationship of ⟨NYU, instance of, Private University⟩does not exist in the KB. We aim to add this relationship to our KB. This is the typical situation for KB enrichment (as opposed to constructing a KB from scratch or performing relation extraction for other purposes, such as Q&A or summarization). KB enrichment mandates that the entities and relationships of the extracted triples are canonicalized by mapping them to their proper entity and predicate IDs in a KB. Table 1 illustrates an example of triples extracted from a sentence. The entities and predicate of the first extracted triple, including NYU, instance of, and Private University, are mapped to their unique IDs Q49210, P31, and Q902104, respectively, to comply with the semantic space of the KB. Previous studies on relation extraction have employed both unsupervised and supervised approaches. Unsupervised approaches typically start with a small set of manually defined extraction patterns to detect entity names and phrases about relationships in an input text. This paradigm is known as Open Information Extraction (Open IE) (Banko et al., 2007; Corro and Gemulla, 2013; Gashteovski et al., 2017). In this line of approaches, both entities and predicates are captured in their surface forms without canonicalization. Supervised approaches train statistical and neural models for inferring the relationship between two known entities in a sentence (Mintz et al., 2009; Riedel et al., 2010, 2013; Zeng et al., 2015; Lin et al., 2016). Most of these studies employ a preprocessing step to recognize the entities. Only few studies have fully integrated the mapping of extracted triples onto uniquely identified KB entities by using logical reasoning on the existing KB to disambiguate the extracted entities (e.g., (Suchanek et al., 2009; Sa et al., 2017)). Most existing methods thus entail the need for Named Entity Disambiguation (NED) (cf. the survey by Shen et al. (2015)) as a separate processing step. In addition, the mapping of relationship phrases onto KB predicates necessitates another mapping step, typically aided by paraphrase dictionaries. This two-stage architecture is inherently prone to error propagation across its two stages: NED errors may cause extraction errors (and vice versa) that lead to inaccurate relationships being added to the KB. We aim to integrate the extraction and the canonicalization tasks by proposing an endto-end neural learning model to jointly extract triples from sentences and map them into an existing KB. Our method is based on the encoder-decoder framework (Cho et al., 2014) by treating the task as a translation of a sentence into a sequence of elements of triples. 
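Because the target side of this "translation" is a flat sequence of entity and predicate IDs in which every three consecutive IDs form one triple (as stated in Section 3.1), the extracted triples can be recovered by simple grouping. The helper below is an illustrative sketch, not the authors' code, applied to the Table 1 example that the next sentence walks through.

```python
def ids_to_triples(id_sequence):
    """Group a decoded flat sequence of KB identifiers into
    (head, predicate, tail) triples; every three consecutive IDs form one triple."""
    if len(id_sequence) % 3 != 0:
        raise ValueError("decoder output length must be a multiple of 3")
    return [tuple(id_sequence[i:i + 3]) for i in range(0, len(id_sequence), 3)]

# The Table 1 example: two canonicalized triples for the NYU sentence.
decoded = ["Q49210", "P31", "Q902104", "Q49210", "P131", "Q11299"]
assert ids_to_triples(decoded) == [
    ("Q49210", "P31", "Q902104"),
    ("Q49210", "P131", "Q11299"),
]
```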
For the example in Table 1, our model aims to translate "New York University is a private university in Manhattan" into a sequence of IDs "Q49210 P31 Q902104 Q49210 P131 Q11299", from which we can derive two triples to be added to the KB. A standard encoder-decoder model with attention (Bahdanau et al., 2015) is, however, unable to capture the multi-word entity names and verbal or noun phrases that denote predicates. To address this problem, we propose a novel form of n-gram based attention that computes the ngram combination of attention weight to capture the verbal or noun phrase context that complements the word level attention of the standard attention model. Our model thus can better capture the multi-word context of entities and relationships. Our model harnesses pre-trained word and entity embeddings that are jointly learned with skip gram (Mikolov et al., 2013) and TransE (Bordes et al., 2013). The advantages of our jointly learned embeddings are twofold. First, the embeddings capture the relationship between words and entities, which is essential for named entity disambiguation. Second, the entity embeddings preserve the relationships between entities, which help to build a highly accurate classifier to filter the invalid extracted triples. To cope with the lack of fully labeled training data, we adapt distant supervision to generate aligned pairs of sentence and triple as the training data. We augment the process with co-reference resolution (Clark and Manning, 2016) and dictionary-based paraphrase detection (Ganitkevitch et al., 2013; Grycner and Weikum, 2016). The co-reference resolution helps extract sentences with implicit entity names, which enlarges the set of candidate sentences to be aligned with existing triples in a KB. The paraphrase detection helps filter sentences that do not express any relationships between entities. The main contributions of this paper are: • We propose an end-to-end model for extract231 ing and canonicalizing triples to enrich a KB. The model reduces error propagation between relation extraction and NED, which existing approaches are prone to. • We propose an n-gram based attention model to effectively map the multi-word mentions of entities and their relationships into uniquely identified entities and predicates. We propose joint learning of word and entity embeddings to capture the relationship between words and entities for named entity disambiguation. We further propose a modified beam search and a triple classifier to generate high-quality triples. • We evaluate the proposed model over two real-world datasets. We adapt distant supervision with co-reference resolution and paraphrase detection to obtain high-quality training data. The experimental results show that our model consistently outperforms a strong baseline for neural relation extraction (Lin et al., 2016) coupled with state-of-the-art NED models (Hoffart et al., 2011; Kolitsas et al., 2018). 2 Related Work 2.1 Open Information Extraction Banko et al. (2007) introduced the paradigm of Open Information Extraction (Open IE) and proposed a pipeline that consists of three stages: learner, extractor, and assessor. The learner uses dependency-parsing information to learn patterns for extraction, in an unsupervised way. The extractor generates candidate triples by identifying noun phrases as arguments and connecting phrases as predicates. The assessor assigns a probability to each candidate triple based on statistical evidence. 
This approach was prone to extracting incorrect, verbose and uninformative triples. Various followup studies (Fader et al., 2011; Mausam et al., 2012; Angeli et al., 2015; Mausam, 2016) improved the accuracy of Open IE, by adding handcrafted patterns or by using distant supervision. Corro and Gemulla (2013) developed ClausIE, a method that analyzes the clauses in a sentence and derives triples from this structure. Gashteovski et al. (2017) developed MinIE to advance ClausIE by making the resulting triples more concise. Stanovsky et al. (2018) proposed a supervised learner for Open IE by casting relation extraction into sequence tagging. A bi-LSTM model is trained to predict the label (entity, predicate, or other) of each token of the input. The work most related to ours is Neural Open IE (Cui et al., 2018), which proposed an encoder-decoder with attention model to extract triples. However, this work is not geared for extracting relations of canonicalized entities. Another line of studies use neural learning for semantic role labeling (He et al., 2018), but the goal here is to recognize the predicate-argument structure of a single input sentence – as opposed to extracting relations from a corpus. All of these methods generate triples where the head and tail entities and the predicate stay in their surface forms. Therefore, different names and phrases for the same entities result in multiple triples, which would pollute the KG if added this way. The only means to map triples to uniquely identified entities in a KG is by post-processing via entity linking (NED) methods (Shen et al., 2015) or by clustering with subsequent mapping (Gal´arraga et al., 2014). 2.2 Entity-aware Relation Extraction Inspired by the work of Brin (1998), state-of-theart methods employ distant supervision by leveraging seed facts from an existing KG (Mintz et al., 2009; Suchanek et al., 2009; Carlson et al., 2010). These methods learn extraction patterns from seed facts, apply the patterns to extract new fact candidates, iterate this principle, and finally use statistical inference (e.g., a classifier) for reducing the false positive rate. Some of these methods hinge on the assumption that the co-occurrence of a seed fact’s entities in the same sentence is an indicator of expressing a semantic relationship between the entities. This is a potential source of wrong labeling. Follow-up studies (Hoffmann et al., 2010; Riedel et al., 2010, 2013; Surdeanu et al., 2012) overcome this limitation by various means, including the use of relation-specific lexicons and latent factor models. Still, these methods treat entities by their surface forms and disregard their mapping to existing entities in the KG. Suchanek et al. (2009) and Sa et al. (2017) used probabilistic-logical inference to eliminate false positives, based on constraint solving or Monte Carlo sampling over probabilistic graphical models, respectively. These methods integrate entity linking (i.e., NED) into their models. However, both have high computational complexity and rely on modeling constraints and appropriate priors. Recent studies employ neural networks to learn the extraction of triples. 
Nguyen and Grish232 Wikidata Wikipedia article Joint learning skip-gram & TransE Word Embeddings 0.2 0.4 0.1 0.2 0.1 0.1 0.5 0.1 0.1 0.2 0.2 0.3 0.3 0.3 0.2 0.4 0.2 0.2 0.1 0.1 Entity Embeddings 0.1 0.5 0.1 0.4 0.2 0.1 0.5 0.1 0.5 0.1 0.2 0.3 0.3 0.3 0.3 0.2 0.3 0.3 0.3 0.1 Distant supervision Sentence-Triple pairs Sentence input: New York University is a private university in Manhattan. Expected output: Q49210 P31 Q902104 Q49210 P131 Q11299 Sentence input: New York Times Building is a skyscraper in Manhattan Expected output: Q192680 P131 Q11299 ... Sentence input: New York University is a private university in Manhattan. Expected output: <Q49210,P31,Q902104>; <Q387638,P161,Q40026> N-gram-based attention Encoder Decoder Triple classifier Dataset Collection Module Embedding Module Neural Relation Extraction Module Figure 1: Overview of our proposed solution. man (2015) proposed Convolution Networks with multi-sized window kernel. Zeng et al. (2015) proposed Piecewise Convolution Neural Networks (PCNN). Lin et al. (2016, 2017) improved this approach by proposing PCNN with sentence-level attention. This method performed best in experimental studies; hence we choose it as the main baseline against which we compare our approach. Follow-up studies considered further variations: Zhou et al. (2018) proposed hierarchical attention, Ji et al. (2017) incorporated entity descriptions, Miwa and Bansal (2016) incorporated syntactic features, and Sorokin and Gurevych (2017) used background knowledge for contextualization. None of these neural models is geared for KG enrichment, as the canonicalization of entities is out of their scope. 3 Proposed Model We start with the problem definition. Let G = (E, R) be an existing KG where E and R are the sets of entities and relationships (predicates) in G, respectively. We consider a sentence S = ⟨w1, w2, ..., wi⟩as the input, where wi is a token at position i in the sentence. We aim to extract a set of triples O = {o1, o2, ..., oj} from the sentence, where oj = ⟨hj, rj, tj⟩, hj, tj ∈E, and rj ∈R. Table 1 illustrates the input and target output of our problem. 3.1 Solution Framework Figure 1 illustrates the overall solution framework. Our framework consists of three components: data collection module, embedding module, and neural relation extraction module. In the data collection module (detailed in Section 3.2), we align known triples in an existing KB with sentences that contain such triples from a text corpus. The aligned pairs of sentences and triples will later be used as the training data in our neural relation extraction module. This alignment is done by distant supervision. To obtain a large number of high-quality alignments, we augment the process with a co-reference resolution to extract sentences with implicit entity names, which enlarges the set of candidate sentences to be aligned. We 233 further use dictionary based paraphrase detection to filter sentences that do not express any relationships between entities. In the embedding module (detailed in Section 3.3), we propose a joint learning of word and entity embeddings by combining skip-gram (Mikolov et al., 2013) to compute the word embeddings and TransE (Bordes et al., 2013) to compute the entity embeddings. The objective of the joint learning is to capture the similarity of words and entities that helps map the entity names into the related entity IDs. 
Moreover, the resulting entity embeddings are used to train a triple classifier that helps filter invalid triples generated by our neural relation extraction model. In the neural relation extraction module (detailed in Section 3.4), we propose an n-gram based attention model by expanding the attention mechanism to the n-gram token of a sentence. The ngram attention computes the n-gram combination of attention weight to capture the verbal or noun phrase context that complements the word level attention of the standard attention model. This expansion helps our model to better capture the multi-word context of entities and relationships. The output of the encoder-decoder model is a sequence of the entity and predicate IDs where every three IDs indicate a triple. To generate highquality triples, we propose two strategies. The first strategy uses a modified beam search that computes the lexical similarity of the extracted entities with the surface form of entity names in the input sentence to ensure the correct entity prediction. The second strategy uses a triple classifier that is trained using the entity embeddings from the joint learning to filter the invalid triples. The triple generation process is detailed in Section 3.5 3.2 Dataset Collection We aim to extract triples from a sentence for KB enrichment by proposing a supervised relation extraction model. To train such a model, we need a large volume of fully labeled training data in the form of sentence-triple pairs. Following Sorokin and Gurevych (2017), we use distant supervision (Mintz et al., 2009) to align sentences in Wikipedia1 with triples in Wikidata2 (Vrandecic and Kr¨otzsch, 2014). 1https://dumps.wikimedia.org/enwiki/latest/enwikilatest-pages-articles.xml.bz2 2https://dumps.wikimedia.org/wikidatawiki/entities/latestall.ttl.gz We map an entity mention in a sentence to the corresponding entity entry (i.e., Wikidata ID) in Wikidata via the hyperlink associated to the entity mention, which is recorded in Wikidata as the url property of the entity entry. Each pair may contain one sentence and multiple triples. We sort the order of the triples based on the order of the predicate paraphrases that indicate the relationships between entities in the sentence. We collect sentence-triple pairs by extracting sentences that contain both head and tail entities of Wikidata triples. To generate high-quality sentence-triple pairs, we propose two additional steps: (1) extracting sentences that contain implicit entity names using co-reference resolution, and (2) filtering sentences that do not express any relationships using paraphrase detection. We detail these steps below. Prior to aligning the sentences with triples, in Step (1), we find the implicit entity names to increase the number of candidate sentences to be aligned. We apply co-reference resolution (Clark and Manning, 2016) to each paragraph in a Wikipedia article and replace the extracted co-references with the proper entity name. We observe that the first sentence of a paragraph in a Wikipedia article may contain a pronoun that refers to the main entity. For example, there is a paragraph in the Barack Obama article that starts with a sentence "He was reelected to the Illinois Senate in 1998". This may cause the standard co-reference resolution to miss the implicit entity names for the rest of the paragraph. To address this problem, we heuristically replace the pronouns in the first sentence of a paragraph if the main entity name of the Wikipedia page is not mentioned. 
For the sentence in the previous example, we replace "He" with "Barack Obama". The intuition is that a Wikipedia article contains content of a single entity of interest, and that the pronouns mentioned in the first sentence of a paragraph mostly relate to the main entity. In Step (2), we use a dictionary based paraphrase detection to capture relationships between entities in a sentence. First, we create a dictionary by populating predicate paraphrases from three sources including PATTY (Nakashole et al., 2012), POLY (Grycner and Weikum, 2016), and PPDB (Ganitkevitch et al., 2013) that yield 540 predicates and 24, 013 unique paraphrases. For example, predicate paraphrases for the relation234 #pairs #triples #entities #predicates All (WIKI) 255,654 330,005 279,888 158 Train+val 225,869 291,352 249,272 157 Test (WIKI) 29,785 38,653 38,690 109 Test (GEO) 1,000 1,095 124 11 Table 2: Statistics of the dataset. ship "place of birth" are {born in, was born in, ...}. Then, we use this dictionary to filter sentences that do not express any relationships between entities. We use exact string matching to find verbal or noun phrases in a sentence which is a paraphrases of a predicate of a triple. For example, for the triple ⟨Barack Obama, place of birth, Honolulu⟩, the sentence "Barack Obama was born in 1961 in Honolulu, Hawaii" will be retained while the sentence "Barack Obama visited Honolulu in 2010" will be removed (the sentence may be retained if there is another valid triple ⟨Barack Obama, visited, Honolulu⟩). This helps filter noises for the sentence-triple alignment. The collected dataset contains 255,654 sentence-triple pairs. For each pair, the maximum number of triples is four (i.e., a sentence can produce at most four triples). We split the dataset into train set (80%), dev set (10%) and test set (10%) (we call it the WIKI test dataset). For stress testing (to test the proposed model on a different style of text than the training data), we also collect another test dataset outside Wikipedia. We apply the same procedure to the user reviews of a travel website. First, we collect user reviews on 100 popular landmarks in Australia. Then, we apply the adapted distant supervision to the reviews and collect 1,000 sentence-triple pairs (we call it the GEO test dataset). Table 2 summarizes the statistics of our datasets. 3.3 Joint Learning of Word and Entity Embeddings Our relation extraction model is based on the encoder-decoder framework which has been widely used in Neural Machine Translation to translate text from one language to another. In our setup, we aim to translate a sentence into triples, and hence the vocabulary of the source input is a set of English words while the vocabulary of the target output is a set of entity and predicate IDs in an existing KG. To compute the embeddings of the source and target vocabularies, we propose a joint learning of word and entity embeddings that is effective to capture the similarity between words and entities for named entity disambiguation (Yamada et al., 2016). Note that our method differs from that of Yamada et al. (2016). We use joint learning by combining skip-gram (Mikolov et al., 2013) to compute the word embeddings and TransE (Bordes et al., 2013) to compute the entity embeddings (including the relationship embeddings), while Yamada et al. (2016) use Wikipedia Link-based Measure (WLM) (Milne and Witten, 2008) that does not consider the relationship embeddings. 
Our model learns the entity embeddings by minimizing a margin-based objective function JE: JE = X tr∈Tr X t′r∈T ′r max 0,  γ + f(tr) −f(t′ r)  (1) Tr = {⟨h, r, t⟩|⟨h, r, t⟩∈G} (2) Tr ′ =  h′, r, t | h′ ∈E ∪  h, r, t′ | t′ ∈E (3) f(tr) = ∥h + r −t∥ (4) Here, ∥x∥is the L1-Norm of vector x, γ is a margin hyperparameter, Tr is the set of valid relationship triples from a KG G, and T ′ r is the set of corrupted relationship triples (recall that E is the set of entities in G). The corrupted triples are used as negative samples, which are created by replacing the head or tail entity of a valid triple in Tr with a random entity. We use all triples in Wikidata except those which belong to the testing data to compute the entity embeddings. To establish the interaction between the entity and word embeddings, we follow the Anchor Context Model proposed by Yamada et al. (2016). First, we generate a text corpus by combining the original text and the modified anchor text of Wikipedia. This is done by replacing the entity names in a sentence with the related entity or predicate IDs. For example, the sentence "New York University is a private university in Manhattan" is modified into "Q49210 is a Q902104 in Q11299". Then, we use the skip-gram method to compute the word embeddings from the generated corpus (the entity IDs in the modified anchor text are treated as words in the skip-gram model). Given a sequence of n words [w1, w2, ..., wn], The model learns the word embeddings, by minimizing the following objective function JW : JW = 1 T n X t=1 X −c≤j≤c,j̸=0 log P (wt+j|wt) (5) 235 P (wt+j|wt) = exp(v ′ wt+j ⊤vwt) PW i=1(v ′ i ⊤vwt) (6) where c is the size of the context window, wt denotes the target word, and wt+j is the context word; vw and v ′ w are the input and output vector representations of word w, and W is the vocabulary size. The overall objective function of the joint learning of word and entity embeddings is: J = JE + JW (7) 3.4 N-gram Based Attention Model Our proposed relation extraction model integrates the extraction and canonicalization tasks for KB enrichment in an end-to-end manner. To build such a model, we employ an encoder-decoder model (Cho et al., 2014) to translate a sentence into a sequence of triples. The encoder encodes a sentence into a vector that is used by the decoder as a context to generate a sequence of triples. Because we treat the input and output as a sequence, We use the LSTM networks (Hochreiter and Schmidhuber, 1997) in the encoder and the decoder. The encoder-decoder with attention model (Bahdanau et al., 2015) has been used in machine translation. However, in the relation extraction task, the attention model cannot capture the multiword entity names. In our preliminary investigation, we found that the attention model yields misalignment between the word and the entity. The above problem is due to the same words in the names of different entities (e.g., the word University in different university names such as New York University, Washington University, etc.). During training, the model pays more attention to the word University to differentiate different types of entities of a similar name, e.g., New York University, New York Times Building, or New York Life Building, but not the same types of entities of different names (e.g., New York University and Washington University). This may cause errors in entity alignment, especially when predicting the ID of an entity that is not in the training data. 
Even though we add ⟨Entity-name, Entity-ID⟩ pairs as training data (see the Training section), the misalignments still take place. We address the above problem by proposing an n-gram based attention model. This model computes the attention of all possible n-grams of the sentence input. The attention weights are computed over the n-gram combinations of the word embeddings, and hence the context vector for the decoder is computed as follows. cd t =  he; |N| X n=1 Wn   |Xn| X i=1 αn i xn i     (8) αn i = exp(he⊤Vnxn i ) P|Xn| j=1 exp(he⊤Vnxn j ) (9) Here, cd t is the context vector of the decoder at timestep t, he is the last hidden state of the encoder, the superscript n indicates the n-gram combination, x is the word embeddings of input sentence, |Xn| is the total number of n-gram token combination, N indicates the maximum value of n used in the n-gram combinations (N = 3 in our experiments), W and V are learned parameter matrices, and α is the attention weight. Training In the training phase, in addition to the sentencetriple pairs collected using distant supervision (see Section 3.2), we also add pairs of ⟨Entity-name, Entity-ID⟩of all entities in the KB to the training data, e.g., ⟨New York University, Q49210⟩. This allows the model to learn the mapping between entity names and entity IDs, especially for the unseen entities. 3.5 Triple Generation The output of the encoder-decoder model is a sequence of the entity and predicate IDs where every three tokens indicate a triple. Therefore, to extract a triple, we simply group every three tokens of the generated output. However, the greedy approach (i.e., picking the entity with the highest probability of the last softmax layer of the decoder) may lead the model to extract incorrect entities due to the similarity between entity embeddings (e.g., the embeddings of New York City and Chicago may be similar because both are cities in USA). To address this problem, we propose two strategies: re-ranking the predicted entities using a modified beam search and filtering invalid triples using a triple classifier. The modified beam search re-ranks top-k (k = 10 in our experiments) entity IDs that are predicted 236 Model WIKI GEO Precision Recall F1 Precision Recall F1 Existing Models MinIE (+AIDA) 0.3672 0.4856 0.4182 0.3574 0.3901 0.3730 MinIE (+NeuralEL) 0.3511 0.3967 0.3725 0.3644 0.3811 0.3726 ClausIE (+AIDA) 0.3617 0.4728 0.4099 0.3531 0.3951 0.3729 ClausIE (+NeuralEL) 0.3445 0.3786 0.3607 0.3563 0.3791 0.3673 CNN (+AIDA) 0.4035 0.3503 0.3750 0.3715 0.3165 0.3418 CNN (+NeuralEL) 0.3689 0.3521 0.3603 0.3781 0.3005 0.3349 EncoderDecoder Models Single Attention 0.4591 0.3836 0.4180 0.4010 0.3912 0.3960 Single Attention (+pre-trained) 0.4725 0.4053 0.4363 0.4314 0.4311 0.4312 Single Attention (+beam) 0.6056 0.5231 0.5613 0.5869 0.4851 0.5312 Single Attention (+triple classifier) 0.7378 0.5013 0.5970 0.6704 0.5301 0.5921 Transformer 0.4628 0.3897 0.4231 0.4575 0.4620 0.4597 Transformer (+pre-trained) 0.4748 0.4091 0.4395 0.4841 0.4831 0.4836 Transformer (+beam) 0.5829 0.5025 0.5397 0.6181 0.6161 0.6171 Transformer (+triple classifier) 0.7307 0.4866 0.5842 0.7124 0.5761 0.6370 Proposed N-gram Attention 0.7014 0.6432 0.6710 0.6029 0.6033 0.6031 N-gram Attention (+pre-trained) 0.7157 0.6634 0.6886 0.6581 0.6631 0.6606 N-gram Attention (+beam) 0.7424 0.6845 0.7123 0.6816 0.6861 0.6838 N-gram Attention (+triple classifier) 0.8471 0.6762 0.7521 0.7705 0.6771 0.7208 Table 3: Experiments result. 
by the decoder by computing the edit distance between the entity names (obtained from the KB) and every n-gram token of the input sentence. The intuition is that the entity name should be mentioned in the sentence so that the entity with the highest similarity will be chosen as the output. Our triple classifier is trained with entity embeddings from the joint learning (see Section 3.3). Triple classification is one of the metrics to evaluate the quality of entity embeddings (Socher et al., 2013). We build a classifier to determine the validity of a triple ⟨h, r, t⟩. We train a binary classifier based on the plausibility score (h + r −t) (the score to compute the entity embeddings). We create negative samples by corrupting the valid triples (i.e., replacing the head or tail entity by a random entity). The triple classifier is effective to filter invalid triple such as ⟨New York University, capital of, Manhattan⟩. 4 Experiments We evaluate our model on two real datasets including WIKI and GEO test datasets (see Section 3.2). We use precision, recall, and F1 score as the evaluation metrics. 4.1 Hyperparameters We use grid search to find the best hyperparameters for the networks. We use 512 hidden units for both the encoder and the decoder. We use 64 dimensions of pre-trained word and entity embeddings (see Section 3.3). We use a 0.5 dropout rate for regularization on both the encoder and the decoder. We use Adam (Kingma and Ba, 2015) with a learning rate of 0.0002. 4.2 Models We compare our proposed model3 with three existing models including CNN (the state-of-theart supervised approach by Lin et al. (2016)), MiniE (the state-of-the-art unsupervised approach by Gashteovski et al. (2017)), and ClausIE by Corro and Gemulla (2013). To map the extracted entities by these models, we use two state-of-theart NED systems including AIDA (Hoffart et al., 2011) and NeuralEL (Kolitsas et al., 2018). The precision (tested on our test dataset) of AIDA and NeuralEL are 70% and 61% respectively. To map the extracted predicates (relationships) of the unsupervised approaches output, we use the dictionary based paraphrase detection. We use the same dictionary that is used to collect the dataset (i.e., the combination of three paraphrase dictionaries including PATTY (Nakashole et al., 2012), POLY (Grycner and Weikum, 2016), and PPDB (Ganitkevitch et al., 2013)). We replace the extracted predicate with the correct predicate ID if one of the paraphrases of the correct predicate (i.e., the gold standard) appear in the extracted predicate. Otherwise, we replace the extracted predicate with "NA" to indicate an unrecognized predicate. We also compare our N-gram Attention model with two encoder-decoder based models including the Single Attention model (Bahdanau et al., 2015) and Transformer model (Vaswani et al., 2017). 3The code and the dataset are made available at http://www.ruizhang.info/GKB/gkb.htm 237 4.3 Results Table 3 shows that the end-to-end models outperform the existing model. In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 score. Our proposed model outperforms the best existing model (MinIE) by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test dataset respectively. These results are expected since the existing models are affected by the error propagation of the NED. 
As expected, the combination of the existing models with AIDA achieves higher F1 scores than the combination with NeuralEL as AIDA achieves a higher precision than NeuralEL. To further show the effect of error propagation, we set up an experiment without the canonicalization task (i.e., the objective is predicting a relationship between known entities). We remove the NED pre-processing step by allowing the CNN model to access the correct entities. Meanwhile, we provide the correct entities to the decoder of our proposed model. In this setup, our proposed model achieves 86.34% and 79.11%, while CNN achieves 81.92% and 75.82% in precision over the WIKI and GEO test datasets, respectively. Our proposed n-gram attention model outperforms the end-to-end models by 15.51% and 8.38% in terms of F1 score on the WIKI and GEO test datasets, respectively. The Transformer model also only yields similar performance to that of the Single Attention model, which is worse than ours. These results indicate that our model captures multi-word entity name (in both datasets, 82.9% of the entities have multi-word entity name) in the input sentence better than the other models. Table 3 also shows that the pre-trained embeddings improve the performance of the model in all measures. Moreover, the pre-trained embeddings help the model to converge faster. In our experiments, the models that use the pre-trained embeddings converge in 20 epochs on average, while the models that do not use the pre-trained embeddings converge in 30 −40 epochs. Our triple classifier combined with the modified beam search boost the performance of the model. The modified beam search provides a high recall by extracting the correct entities based on the surface form in the input sentence while the triple classifier provides a high precision by filtering the invalid triples. Discussion We further perform manual error analysis. We found that the incorrect output of our model is caused by the same entity name of two different entities (e.g., the name of Michael Jordan that refers to the American basketball player or the English footballer). The modified beam search cannot disambiguate those entities as it only considers the lexical similarity. We consider using context-based similarity as future work. 5 Conclusions We proposed an end-to-end relation extraction model for KB enrichment that integrates the extraction and canonicalization tasks. Our model thus reduces the error propagation between relation extraction and NED that existing approaches are prone to. To obtain high-quality training data, we adapt distant supervision and augment it with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that better captures the multi-word entity names in a sentence. Moreover, we propose a modified beam search and a triple classification that helps the model to generate high-quality triples. Experimental results show that our proposed model outperforms the existing models by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test dataset respectively. These results confirm that our model reduces the error propagation between NED and relation extraction. Our proposed n-gram attention model outperforms the other encoder-decoder models by 15.51% and 8.38% in terms of F1 score on the two real-world datasets. These results confirm that our model better captures the multi-word entity names in a sentence. 
In the future, we plan to explore contextbased similarity to complement the lexical similarity to improve the overall performance. Acknowledgments Bayu Distiawan Trisedya is supported by the Indonesian Endowment Fund for Education (LPDP). This work is done while Bayu Distiawan Trisedya is visiting the Max Planck Institute for Informatics. This work is supported by Australian Research Council (ARC) Discovery Project DP180102050, Google Faculty Research Award, and the National Science Foundation of China (Project No. 61872070 and No. 61402155). 238 References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of Association for Computational Linguistics, pages 344–354. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In Proceedings of International Semantic Web Conference, pages 722–735. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of International Joint Conference on Artifical intelligence, pages 2670–2676. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of International Conference on Neural Information Processing Systems, pages 2787–2795. Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In Proceedings of The World Wide Web and Databases International Workshop, pages 172–183. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of AAAI Conference on Artificial Intelligence, pages 1306– 1313. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of International Joint Conference on Artificial Intelligence, pages 1511–1517. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of Empirical Methods in Natural Language Processing, pages 1724–1734. Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of Empirical Methods in Natural Language Processing, pages 2256–2262. Luciano Del Corro and Rainer Gemulla. 2013. Clausie: clause-based open information extraction. In Proceedings of International Conference on World Wide Web, pages 355–366. Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of Association for Computational Linguistics, pages 407–413. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 1535–1545. Luis Gal´arraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. 
In Proceedings of International Conference on Information and Knowledge Management, pages 1679–1688. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764. Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. Minie: Minimizing facts in open information extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 2620–2630. Adam Grycner and Gerhard Weikum. 2016. Poly: Mining relational paraphrases from multilingual sentences. In Proceedings of Empirical Methods in Natural Language Processing, pages 2183–2192. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of Association for Computational Linguistics, pages 364–369. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Johannes Hoffart et al. 2011. Robust disambiguation of named entities in text. In Proceedings of Empirical Methods in Natural Language Processing, pages 782–792. Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of Association for Computational Linguistics, pages 286–295. Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of AAAI Conference on Artificial Intelligence, pages 3060–3066. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations. 239 Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In Proceedings of Conference on Computational Natural Language Learning, pages 519–529. Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. In Proceedings of Association for Computational Linguistics, volume 1, pages 34–43. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of Association for Computational Linguistics, volume 1, pages 2124–2133. Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for structured data to text generation. Proceedings of International Conference on Natural Language Generation, pages 1–9. Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of International Joint Conference on Artificial Intelligence, pages 4074–4077. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 523–534. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of International Conference on Neural Information Processing Systems, pages 3111–3119. David Milne and Ian H. Witten. 2008. An effective, low-cost measure of semantic relatedness obtained from wikipedia links. In Proceedings of AAAI Workshop on Wikipedia and Artificial Intelligence, pages 25–30. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. 
Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of Association for Computational Linguistics and International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of Association for Computational Linguistics. Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. In Proceedings of Empirical Methods in Natural Language Processing, pages 1135–1145. Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, pages 327–333. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of Workshop on Vector Space Modeling for Natural Language Processing, pages 39–48. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148–163. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Christopher De Sa, Alexander Ratner, Christopher R´e, Jaeho Shin, Feiran Wang, Sen Wu, and Ce Zhang. 2017. Incremental knowledge base construction using deepdive. Very Large Data Bases Journal, 26(1):81–105. Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Trans. Knowl. Data Eng., 27(2):443–460. Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of International Conference on Neural Information Processing Systems, pages 926–934. Daniil Sorokin and Iryna Gurevych. 2017. Contextaware representations for knowledge base relation extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 1784–1789. Gabriel Stanovsky, Julian Michael, Ido Dagan, and Luke Zettlemoyer. 2018. Supervised open information extraction. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 885–895. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of International Conference on World Wide Web, pages 697–706. Fabian M. Suchanek, Mauro Sozio, and Gerhard Weikum. 2009. SOFIE: a self-organizing framework for information extraction. In Proceedings of International Conference on World Wide Web, pages 631–640. 240 Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attributepreserving embedding. Proceedings of International Semantic Web Conference, pages 628–644. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 455–465. 
Bayu Distiawan Trisedya, Jianzhong Qi, and Rui Zhang. 2019. Entity alignment between knowledge graphs using attribute embeddings. In Proceedings of AAAI Conference on Artificial Intelligence. Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. Gtr-lstm: A triple encoder for sentence generation from rdf data. In Proceedings of Association for Computational Linguistics, pages 1627–1637. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of Neural Information Processing Systems, pages 5998–6008. Denny Vrandecic and Markus Kr¨otzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78–85. Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge base completion using embeddings and rules. In Proceedings of International Joint Conference on Artificial Intelligence, pages 1859–1865. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of Conference on Computational Natural Language Learning, pages 250–259. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of Empirical Methods in Natural Language Processing, pages 1753–1762. Peng Zhou, Jiaming Xu, Zhenyu Qi, Hongyun Bao, Zhineng Chen, and Bo Xu. 2018. Distant supervision for relation extraction with hierarchical selective attention. Neural Networks, 108:240–247.
2019
23
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2396–2408 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2396 Head-Driven Phrase Structure Grammar Parsing on Penn Treebank Junru Zhou and Hai Zhao∗ Department of Computer Science and Engineering Key Lab of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering Shanghai Jiao Tong University, Shanghai, China [email protected], [email protected] Abstract Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism representing rich contextual syntactic and even semantic meanings. This paper makes the first attempt to formulate a simplified HPSG by integrating constituent and dependency formal representations into head-driven phrase structure. Then two parsing algorithms are respectively proposed for two converted tree representations, division span and joint span. As HPSG encodes both constituent and dependency structure information, the proposed HPSG parsers may be regarded as a sort of joint decoder for both types of structures and thus are evaluated in terms of extracted or converted constituent and dependency parsing trees. Our parser achieves new state-of-the-art performance for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank, verifying the effectiveness of joint learning constituent and dependency structures. In details, we report 95.84 F1 of constituent parsing and 97.00% UAS of dependency parsing on PTB. 1 Introduction Head-driven phrase structure grammar (HPSG) is a highly lexicalized, constraint-based grammar developed by (Pollard and Sag, 1994). As opposed to dependency grammar, HPSG is the immediate successor of generalized phrase structure grammar. HPSG divides language symbols into categories of different types, such as vocabulary, phrases, etc. Each category has different grammar letter information. The complete language symbol which is a complex type feature structure represented by ∗Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100) and key projects of Natural Science Foundation of China (No. U1836222 and No. 61733011) Kim gives Sandy books VP S NNP VBZ NNP NP NNS NP NP (a) Constituent Kim gives Sandy books ROOT NNP VBZ NNP NNS (b) Dependency Sandy 2 books 3 SYNSEM|LOC|CAT HEAD 4 SUBCAT< 1 > HEAD 4 SYNSEM|LOC|CAT HEAD 4 verb[fin] SUBCAT< 1 NP, 2 NP, 3 NP > SUBCAT < > SYNSEM|LOC|CAT 1 gives Kim (=S[fin]) (=VP[fin]) C H H C C (c) HPSG Figure 1: Constituent, dependency and HPSG trees. attribute value matrices (AVMs) includes phonological, syntactic, and semantic properties, the valence of the word and interrelationship between various components of the phrase structure. Meanwhile, the constituent structure of HPSG follows the HEAD FEATURE PRINCIPLE (HFP) (Pollard and Sag, 1994): “the head value of any headed phrase is structure-shared with the HEAD value of the head daughter. The effect of the HFP is to guarantee that headed phrases really are projections of their head daughter” (p. 34). Constituent and dependency are two typical syntactic structure representation forms, which have been well studied from both linguistic and computational perspective (Chomsky, 1981; Bresnan et al., 2015). 
The two formalisms carrying distinguished information have each own strengths that constituent structure is better at disclosing phrasal continuity while the dependency structure is better at indicating dependency relation among words. 2397 Typical dependency treebanks are usually converted from constituent treebanks, though they may be independently annotated as well for the same languages. In reverse, constituent parsing can be accurately converted to dependencies representation by grammatical rules or machine learning methods (De Marneffe et al., 2006; Ma et al., 2010). Such convertibility shows a close relation between constituent and dependency representations, which also have a strong correlation with the HFP of HPSG as shown in Figure 1. Thus, it is possible to combine the two representation forms into a simplified HPSG not only for even better parsing but also for more linguistically rich representation. In this work, we exploit both strengths of the two representation forms and combine them into HPSG. To our best knowledge, it is first attempt to perform such a formulization1. In this paper, we explore two parsing methods for the simplified HPSG parse tree which contains both constituent and dependency syntactic information. Our simplified HPSG will be from the annotations or conversions of Penn Treebank (PTB)2 (Marcus et al., 1993). Thus the evaluation for our HPSG parser will also be done on both the annotated constituent and converted dependency parse trees, which let our HPSG parser compare to existing constituent and dependency parsers individually. Our experimental results show that our HPSG parser brings better prediction on both constituent and dependency tree structures. In addition, the empirical results show that our parser reaches new state-of-the-art for both parsing tasks. To sum up, we make the following contributions: • For the first time, we formulate a simplified HPSG by combining constituent and dependency tree structures. • We propose two novel methods to handle the simplified HPSG parsing. • Our model achieves state-of-the-art results on PTB and CTB for both constituent and dependency parsing. The rest of the paper is organized as follows: Section 2 presents the tree structure of HPSG and two span representations. Section 3 presents our 1Code and trained English models are publicly available: https://github.com/DoodleJZ/HPSG-Neural-Parser 2PTB is an English treebank, our parser will also be evaluated on Chinese Penn Treebank (CTB) which follows the similar annotation guideline as PTB. sign synsem local category head PHON list of string SUBJ list of synsem SEM semantics COMPS list of synsem MODL synsem MODR synsem nonlocal REL list of local SLASH list of local SYNSEM LOCAL NONLOC CAT HEAD Figure 2: HPSG sign from (Miyao et al., 2004). model based on self-attention architecture and the adopted parsing algorithms. Section 4 reports the experiments and results on PTB and CTB treebanks to evaluate our model. At last, we survey related work and conclude this paper respectively in Sections 5 and 6. 2 Simplified HPSG on PTB (Miyao et al., 2004) reports the first work of semiautomatically acquiring an English HPSG grammar from the Penn Treebank. Figure 2 demonstrates an HPSG unit presentation (formally called sign), in which head consists of the essential information. 
As the work of (Miyao et al., 2004) cannot demonstrate an accurate enough HPSG from the entire source constituent treebank, we focus on the core of HPSG sign, HEAD, which is conveniently connected with dependency grammar. For the purpose of accurate HPSG building, in this work, we construct a simplified HPSG only from annotations of PTB by combining constituent and dependency parse trees. 2.1 Tree Preprocessing In standard HPSG relating to HFP, the HEAD value of any headed phrase is structure-shared with the HEAD value of the head daughter. In other words, the phrase in our simplified HPSG tree may be exactly the same as that in a constituent tree and the head word of the phrase corresponding to the parent of the head word of its children in dependency tree3. For example, in the constituent tree of Figure 3(a), Federal Paper Board is a phrase (1, 3) assigned with category NP and in dependency tree, Board is parent of Federal and Paper, thus in our simplified HPSG tree, the head of phrase (1, 3) is Board. 3In standard HPSG, the HEAD value is the part-of-speech of the head word. But in our simplified HPSG tree, we set the head word as HEAD value for convenience. 2398 Federal NNP wood NN products NNS sells VBZ Paper NNP Board NNP paper NN and CC  NP NP VP S  1 2 3 4 5 6 7 8 9 (5,8) (4,8) (1,3) (1,9)  Federal NNP ROOT sells VBZ Paper NNP Board NNP wood NN products NNS paper NN and CC  1 2 3 4 5 6 7 8 9 (a) Constituent and dependency. Federal NNP wood NN products NNS sells VBZ Paper NNP Board NNP paper NN and CC  H-NP NP H-VP H-S  HHHHH- # HH1 2 3 4 5 6 7 8 9 (5,5) (6,6) (1,1) (2,2) (3,3) (4,4) (1,3) (4,8) (1,9) (9,9) (5,8) (8,8) (5,7) (7,7) (b) Division span structure. Federal  NNP wood NN products NNS sells VBZ  Paper NNP Board NNP HEAD sells Categ < S > HEAD sells Categ < VP > HEAD products Categ < NP > HEAD paper Categ < # > paper NN and CC HEAD Board Categ < NP > ROOT 2 1 3 4 5 6 7 8 9 (5,7) (5,8) (4,8) (1,9) (1,3) (c) Joint span structure. Figure 3: Constituent, dependency and two different simplified HPSG structures of the same sentence which is indexed from 1 to 9 and assigned interval range for each node. Dotted box represents the same part. The special category # is assigned to divide the phrase with multiple heads. Division span structure adds token H in front of the category to distinguish whether the phrase is on the left or right of the head. Thus the head is the last one of the category with H which is marked with a box. Joint span structure contains constitute phrase and dependency arc. Categ in each node represents the category of each constituent and HEAD indicates the head word. For dependency parsing on PTB, the dependency structures are mainly obtained by converting constituent structure with three head rules: (1) Penn2Malt4 and the head rules of Yamada and Matsumoto (2003), noted as PTB-YM; (2) LTH Converter5 (Johansson and Nugues, 2007), noted as PTB-LTH; (3) Stanford parser6(De Marneffe et al., 2006), noted as PTB-SD. Following most of the recent work, we apply the PTB-SD representation converted by version 3.3.0 of the Stanford parser. However, this dependency representation results in around 1% of phrases containing two or three head words. As shown in Figure 3(a), the phrase (5,8) assigned with a category NP contains 2 head words of paper and products in dependency tree. In order to deal with the problem, we introduce a special category # to divide the phrase with multiple heads meeting only one head word for each phrase. 
After 4http://cl.lingfil.uu.se/ nivre/research/Penn2Malt.html 5http://nlp.cs.lth.se/software/treebank converter 6http://nlp.stanford.edu/software/lex-parser.html this conversion, only 50 heads are errors in Penn Treebank. 2.2 Span Representations of HPSG Each node in the HPSG tree noted as AVM represents compound structure. Even in our simplified HPSG, each phrase (span) should be companied with its head. To facilitate the processing of existing parsers, we propose two ways to convert the simplified HPSG into a span-style tree structure. Division Span A phrase is divided into two parts corresponding to left and right of its head. To distinguish the left and right parts, we add a special token H in front of the category to indicate the left span, in which the head of the original phrase is always the last word. Since some leaves of the tree are without category, we explicitly use a special empty category Ø for their representation, and the token H is also applied to the empty category. As shown in Figure 3(b), the head of phrase (1,3) in the dotted box is Board, thus we add the special token H in front of Federal, Paper and 2399 Paper . . . wood Token Representation Self-Attention Layers Decoder Layer Scoring Layer Dependency Score Span Score Simplified HPSG Input Federal 1 1 . . . products NNP NNP NN NNS Federal  NNP wood NN products NNS sells VBZ  Paper NNP Board NNP HEAD sells Categ < S > HEAD sells Categ < VP > HEAD products Categ < NP > HEAD paper Categ < # > paper NN and CC HEAD Board Categ < NP > ROOT 2 1 3 4 5 6 7 8 9 (5,7) (5,8) (4,8) (1,9) (1,3) Figure 4: The framework of our joint span HPSG parsing model. Board category. With this operation, head information has been encoded into span boundary of a standard constituent tree and we only need to parse such a constituent tree. Joint Span We recursively define a structure called joint span to cover both constituent and head information. A joint span consists of all its children phrases and all dependency arcs between heads of all these children phrases. For example, the HPSG node SH(1, 9) in Figure 3(c) as a joint span is: SH(1, 9) = {SH(1, 3), SH(4, 8), SH(9, 9), l(1, 9), d(Board, sells), d(., sells)}, where l(i, j) denotes category of span (i, j) and d(r, h) indicates the dependency between the word r and its parent h. At last, following the recursive definition, the entire HPSG tree T being a joint span can be represented as: SH(T) = {SH(1, 9), d(sells, root)}. As all constituent and head information has been formally encoded into a span-like structure, we can use a constituent-like parser for such a joint span tree. 3 Our Model 3.1 Overview Using an encoder-decoder backbone, our model apply self-attention encoder (Vaswani et al., 2017) which is modified by position partition (Kitaev and Klein, 2018a). Since our two converted structures of simplified HPSG are based on the phrase, thus we can employ CKY-style (Cocke, 1969; Younger, Daniel H., 1975; Kasami, Tadao, 1965) decoder for both to find the tree with the highest predicted scores. The difference is that for division span structure, we only need span scores while for joint span structure, we need both of span and dependency scores. Given a sentence s = {w1, w2, . . . , wn}, we attempt to predict a simplified HPSG tree. As shown in Figure 4, our parsing model includes four modules: token representation, self-attention encoder, scoring module and CKY-style decoder7. 
3.2 Token Representation In our model, token representation xi is composed of character, word and part-of-speech (POS) embeddings. For character-level representation, we use CharLSTM (Kitaev and Klein, 2018a). For word-level representation, we concatenate randomly initialized and pre-trained word embeddings. Finally, we concatenate character representation, word representation and POS embedding as our token representation: xi = [xchar; xword; xPOS]. 7For dependency label of each word, it is not necessary for our HPSG parsing purpose, however, to enable our parser fully comparable to existing dependency parsers, we still train a separated multiclass classifier simultaneously with the parser by optimizing the sum of their objectives. 2400 3.3 Self-Attention Encoder The encoder in our model is adapted from (Vaswani et al., 2017) and factor explicit content and position information in the self-attention process. The input matrices X = [x1, x2, . . . , xn] in which xi is concatenated with position embedding are transformed by a self-attention encoder. We factor the model between content and position information both in self-attention sub-layer and feed-forward network, whose setting details follow (Kitaev and Klein, 2018a). 3.4 Decoder for Division Span HPSG After reconstructing of the HPSG tree as a constituent tree with head information as described in Section 2.2, we follow the constituent parsing as (Kitaev and Klein, 2018a; Gaddy et al., 2018) to predict constituent parse tree. Firstly, we add a special empty category Ø to spans to binarize the n-ary nodes and apply a unary atomic category to deal with the nodes of the unary chain, corresponding to nested spans with the same endpoints. Then, we train the span scorer. Span vector sij is the concatenation of the vector differences sij = [−→ yj −−−→ yi−1; ←−− yj+1 −←− yi] which −→ yj is constructed by splitting in half the outputs from the self-attention encoder. We apply one-layer feedforward networks to generate span scores vector, taking span vector sij as input: S(i, j) = W2g(LN(W1sij + b1)) + b2, where LN denotes Layer Normalization, g is the Rectified Linear Unit nonlinearity. The individual score of category ℓis denoted by Scateg(i, j, ℓ) = [S(i, j)]ℓ, where []ℓindicates the value of corresponding the element ℓof the score vector. The score s(T) of the constituent parse tree T is to sum every scores of span (i, j) with category ℓ: s(T) = X (i,j,ℓ)∈T Scateg(i, j, ℓ). The goal of constituent parsing is to find the tree with the highest score: ˆT = arg maxT s(T). We use CKY-style algorithm (Stern et al., 2017a; Gaddy et al., 2018) to obtain the tree ˆT in O(n3) time complexity. This structured prediction problem is handled with satisfying the margin constraint: s(T ∗) ≥s(T) + ∆(T, T ∗), where T ∗denotes correct parse tree and ∆is the Hamming loss on category spans with a slight modification during the dynamic programming search. The objective function is the hinge loss, J1(θ) = max(0, max T [s(T)+∆(T, T ∗)]−s(T ∗)). For dependency labels, following (Dozat and Manning, 2017), the classifier takes head and its children as features. We minimize the negative log probability of the correct dependency label li for the child-parent pair (xi, hi) implemented as cross-entropy loss: Jlabels(θ) = −logPθ(li|xi, hi). Thus, the overall loss is sum of the objectives: JDivision(θ) = J1(θ) + Jlabels(θ). 
3.5 Decoder for Joint Span HPSG As our joint span is defined in a recursive way, to score the root joint span has been equally scoring all spans and dependencies in the HPSG tree. For span scores, we continuously apply the approach and hinge loss J1(θ) in the previous section. For dependency scores, we predict a distribution over the possible head for each word and use the biaffine attention mechanism (Dozat and Manning, 2017) to calculate the score as follow: αij = hT i Wgj + U T hi + V T gj + b, where αij indicates the child-parent score, W denotes the weight matrix of the bi-linear term, U and V are the weight vectors of the linear term and b is the bias item, hi and gi are calculated by a distinct one-layer perceptron network. We minimize the negative log-likelihood of the golden dependency tree Y , which is implemented as a cross-entropy loss: J2(θ) = −(logPθ(hi|xi) + logPθ(li|xi, hi)) , where Pθ(hi|xi) is the probability of correct parent node hi for xi, and Pθ(li|xi, hi) is the probability of the correct dependency label li for the 2401 Algorithm 1 Joint span parsing algorithm Input: sentence leng n, span and dependency score s(i, j, ℓ), d(r, h), 1 ≤i ≤j ≤n, ∀r, h, ℓ Output: maximum value SH(T) of tree T Initialization: sc[i][j][h] = si[i][j][h] = 0, ∀i, j, h for len = 1 to n do for i = 1 to n −len + 1 do j = i + len −1 if len = 1 then sc[i][j][i] = si[i][j][i] = max ℓ s(i, j, ℓ) else for h = i to j do splitl = max i≤r<h { max r≤k<h { sc[i][k][r]+ si[k + 1][j][h] } + d(r, h) } splitr = max h<r≤j { max h≤k<r { si[i][k][h]+ sc[k + 1][j][r] } + d(r, h) } sc[i][j][h] = max { splitl, splitr }+ max ℓ̸=∅s(i, j, ℓ) si[i][j][h] = max { splitl, splitr }+ max ℓ s(i, j, ℓ) end for end if end for end for SH(T) = max 1≤h≤n { sc[1][n][h] + d(h, root) } child-parent pair (xi, hi). To predict span and dependency scores simultaneously, we jointly train our parser for minimizing the overall loss: JJoint(θ) = J1(θ) + J2(θ). During testing, we propose a CKY-style algorithm as shown in Algorithm 1 to explicitly find the globally highest span and dependency score SH(T) of our simplified HPSG tree T. In order to binarize the constituent parse tree with head, we introduce the complete span sc and the incomplete span si which is similar to Eisner algorithm (Eisner, 1996). After finding the best score SH(T), we backtrack the chart with split point k and sub-root r to construct the simplified HPSG tree T. Comparing with constituent parsing CKY-style algorithm (Stern et al., 2017a), the dependency score d(r, h) in our algorithm affects the selection of best split point k. Since we need to find the best value of sub-head r and split point k, the complexity of the algorithm is O(n5) time and O(n3) space. To control the effect of combining span and dependency scores, we apply a weight λ: s(i, j, ℓ) = λScateg(i, j, ℓ), d(i, j) = (1.0−λ)αij, where λ in the range of 0 to 1. In addition, we can merely generate constituent or dependency parsing tree by setting λ to 1 or 0, respectively. 4 Experiments In order to evaluate the proposed model, we convert our simplified HPSG tree to constituent and dependency parse trees and evaluate on two benchmark treebanks, English Penn Treebank (PTB) and Chinese Penn Treebank (CTB5.1) following standard data splitting (Zhang and Clark, 2008; Liu and Zhang, 2017b). The placeholders with the -NONE- tag are stripped from the CTB. POS tags are predicted using the Stanford tagger (Toutanova et al., 2003) and we use the same pretagged dataset as (Cross and Huang, 2016). 
For constituent parsing, we use the standard evalb8 tool to evaluate the F1 score. For dependency parsing, following (Dozat and Manning, 2017; Kuncoro et al., 2016; Ma et al., 2018), we report the results without punctuations for both treebanks. 4.1 Setup Hyperparameters In our experiments, we use 100D GloVe (Pennington et al., 2014) and structured-skipgram (Ling et al., 2015) pre-train embeddings for English and Chinese respectively. The character representations are randomly initialized, and the dimension is 64. For self-attention encoder, we use the same hyperparameters settings as (Kitaev and Klein, 2018a). For span scores, we apply a hidden size of 250-dimensional feed-forward networks. For dependency biaffine scores, we employ two 1024dimensional MLP layers with the ReLU as the activation function and a 1024-dimensional parameter matrix for biaffine attention. In addition, we augment our parser with ELMo (Peters et al., 2018) and a larger version of BERT (Devlin et al., 2018) (24 layers, 16 attention heads per layer, and 1024-dimensional hidden vectors) to compare with other pre-trained or ensemble models. We set 4 layers of self-attention for ELMo and 2 layers of self-attention for BERT as (Kitaev and Klein, 2018a,b). 8http://nlp.cs.nyu.edu/evalb/ 2402 Self-attention Layers F1 UAS LAS Division Span Model 8 self-attention layers 93.42 94.05 92.68 12 self-attention layers 93.57 94.40 93.05 16 self-attention layers 93.36 94.08 92.66 Joint Span Model 8 self-attention layers 93.64 95.75 94.36 12 self-attention layers 93.78 95.92 94.49 16 self-attention layers 93.54 95.54 94.21 Table 1: Different self-attention layers on English dev set. Training Details we use 0.33 dropout for biaffine attention and MLP layers. All models are trained for up to 150 epochs with batch size 150 on a single NVIDIA GeForce GTX 1080Ti GPU with Intel i7-7800X CPU. We use the same training settings as (Kitaev and Klein, 2018a) and (Kitaev and Klein, 2018b) if use BERT. 4.2 Self-attention Layers This subsection examines the impact of different numbers of self-attention layers varying from 8 to 16. The comparison in Table 1 indicates that the best performing setting comes from 12 selfattention layers, and more than 12 layers shows almost no promotion even reduces the accuracy. Thus the rest experiments are done with 12 layers of the self-attention encoder. 4.3 Moderating constituent and Dependency The weight parameter λ plays an important role to balance the scoring of span and dependency. When λ set to 0, indicates only using dependency score to generate dependency tree as the general first-order dependency parsing (Eisner, 1996), while λ set to 1, shows the constituent parsing only. λ set to between 0 to 1 indicates our general simplified HPSG parsing, providing both constituent and dependency structure prediction. The comparison in Figure 5 shows that our HPSG decoder is better than either separate constituent or dependency decoder, which shows the bonus of joint predicting constituent and dependency. Moreover, λ set to 0.5 achieves the best performance in terms of both F1 score and UAS. 4.4 Joint Span HPSG Parsing We compare our join span HPSG parser with a separate learning constituent parsing model which Model F1 UAS LAS separate constituent 93.47 converted dependency 95.06 93.81 joint span λ = 1.0 93.67 joint span λ = 0.0 95.82 94.43 joint span λ = 0.5 93.78 95.92 94.49 converted dependency 95.69 94.45 Table 2: English dev set performance of joint span HPSG parsing. 
The converted means the corresponding dependency parsing results are from the corresponding constituent parse tree using head rules. Figure 5: Balancing constituent and dependency of joint span HPSG parsing on English dev set. takes the same token representation and selfattention encoder on PTB dev set. The constituent parsing results are also converted into dependency ones by PTB-SD for comparison. When λ is set to 0 and 1, our joint span HPSG parser works as the dependency-only parser and constituent-only parser respectively. Table 2 shows that even in such a work mode, our HPSG parser still outperforms the separate constituent parser in terms of either constituent and dependency parsing performance. As λ is set to 0.5, our HPSG parser will give constituent and dependency structures at the same time, which are shown better than the work alone mode of either constituent or dependency parsing. Besides, the comparison also shows that the directly predicted dependencies from our model are slightly better than those converted from the predicted constituent parse trees. 4.5 Parsing Speed We compare the parsing speed of our parser with other neural parsers in Table 4. Although the 2403 Model sents/sec Petrov and Klein (2007) 6.2 Zhu et al. (2013) 89.5 Liu and Zhang (2017b) 79.2 Stern et al. (2017a) 75.5 Shen et al. (2018) 111.1 Shen et al. (2018)(w/o tree inference) 351 Our (Division) 226.3 Our (Joint) 158.7 Table 3: Parsing speed on the PTB dataset. Model English Chinese UAS LAS UAS LAS Chen and Manning (2014) 91.8 89.6 83.9 82.4 Andor et al. (2016) 94.61 92.79 Zhang et al. (2016) 93.42 91.29 87.65 86.17 Cheng et al. (2016) 94.10 91.49 88.1 85.7 Kuncoro et al. (2016) 94.26 92.06 88.87 87.30 Ma and Hovy (2017) 94.88 92.98 89.05 87.74 Dozat and Manning (2017) 95.74 94.08 89.30 88.23 Li et al. (2018a) 94.11 92.08 88.78 86.23 Ma et al. (2018) 95.87 94.19 90.59 89.29 Our (Division) 94.32 93.09 89.14 87.31 Our (Joint) 96.09 94.68 91.21 89.15 Our (Division*) 91.69 90.54 Our (Joint*) 93.24 91.95 Pre-training/Ensemble Choe and Charniak (2016) 95.9 94.1 Kuncoro et al. (2017) 95.8 94.6 Wang et al. (2018b)(ELMo) 96.35 95.25 Our (Division) + ELMo 95.77 94.21 Our (Joint) + ELMo 96.76 94.93 Our (Division) + BERT 96.22 94.56 Our (Joint) + BERT 97.00 95.43 Table 4: Dependency parsing on PTB and CTB test set. * represents CTB constituent data splitting. time complexity of our Joint span model is O(n5), there is not much slower than Division span model with O(n3) time complexity. The comparison suggests that training and inference times are dominated by neural network computations and our decoder consumes a small fraction of total running time. 4.6 Main Results Tables 4, 5 and 6 compare our model to existing state-of-the-art on test sets. Division and Joint indicate the results of division and joint span parsing respectively. On PTB, our best model achieves new state-of-the-art on both constituent and dependency parsing. On CTB, our best model achieves 92.18 F1 score of constituent parsing and 91.21% UAS and 89.15% LAS of dependency parsing. Since constituent and dependency parsModel LR LP F1 Zhu et al. (2013) 90.7 90.2 90.4 Dyer et al. (2016) 89.8 Cross and Huang (2016) 90.5 92.1 91.3 Stern et al. (2017a) 93.2 90.3 91.8 Gaddy et al. (2018) 91.76 92.41 92.08 Stern et al. (2017b) 92.57 92.56 92.56 Kitaev and Klein (2018a) 93.20 93.90 93.55 Our (Division) 93.41 93.87 93.64 Our (Joint) 93.64 93.92 93.78 Pre-training/Ensemble Dyer et al. 
(2016) 93.3 Choe and Charniak (2016) 93.8 Liu and Zhang (2017a) 94.2 Fried et al. (2017) 94.66 Kitaev and Klein (2018a) + ELMo 94.85 95.40 95.13 Kitaev and Klein (2018b) + BERT 95.46 95.73 95.59 Kitaev and Klein (2018b) 95.51 96.03 95.77 Our (Division) + ELMo 94.54 95.68 95.10 Our (Joint) + ELMo 95.04 95.39 95.22 Our (Division) + BERT 95.51 95.93 95.72 Our (Joint) + BERT 95.70 95.98 95.84 Table 5: Constituent parsing on PTB test set. ing have different data splitting on CTB (Zhang and Clark, 2008; Liu and Zhang, 2017b), we report our parsing performance on both data splitting. The comparison shows that our HPSG parsing model is more effective than learning constituent or dependency parsing separately. We also find that dependency parsing is shown much more beneficial from Joint than Division way which empirically suggests dependency score in our joint loss is helpful. We augment our parser with ELMo and a larger version of BERT as the sole token representation to compare with other models. Our Joint model in BERT setting even defeats other ensemble models of both constituent and dependency parsing achieving 95.84 F1 score, 97.00% UAS and 95.43% LAS. 5 Related Work In the earlier time, linguists and NLP researchers discussed how to encode lexical dependencies in phrase structures, like lexicalized tree adjoining grammar (LTAG) (Schabes et al., 1988) and headdriven phrase structure grammar (HPSG) (Pollard and Sag, 1994) which is a constraint-based highly lexicalized non-derivational generative grammar 2404 Model LR LP F1 Wang et al. (2015) 83.2 Dyer et al. (2016) 84.6 Liu and Zhang (2017b) 85.9 85.2 85.5 Liu and Zhang (2017a) 86.1 Shen et al. (2018) 86.6 86.4 86.5 Fried and Klein (2018) 87.0 Teng and Zhang (2018) 87.1 87.5 87.3 Kitaev and Klein (2018b) 91.55 91.96 91.75 Our (Division) 91.14 93.09 92.10 Our (Joint) 92.03 92.33 92.18 Our (Division*) 90.07 91.68 90.87 Our (Joint*) 90.91 91.16 91.03 Table 6: Constituent parsing on CTB test set. * represents CTB dependency data splitting. framework. In the past decade, there was a lot of largescale HPSG-based NLP parsing systems which had been built. Such as the Enju English Chinese parser (Miyao et al., 2004; Yu et al., 2010), the Alpino parser for Dutch (Van Noord et al., 2006), and the LKB & PET (Copestake, 2002; Callmeier, 2000) for English, German, and Japanese.. Meanwhile, since HPSG represents the grammar framework in a precisely constrained way, it is difficult to broadly cover unseen real-world texts for parsing. Consequently, according to (Zhang and Krieger, 2011), many of these large-scale grammar implementations are forced to choose to either compromise the linguistic preciseness or to accept the low coverage in parsing. Previous works of HPSG approximation focus on two major approaches: grammar based approach (Kiefer and Krieger, 2004), and the corpus-driven approach (Krieger, 2007) and (Zhang and Krieger, 2011) which proposes PCFG approximation as a way to alleviate some of these issues in HPSG processing. Recently, with the impressive success of deep neural networks in a wide range of NLP tasks (Li et al., 2018b; Zhang et al., 2018a; Li et al., 2018c; Zhang et al., 2018c,b; Zhang and Zhao, 2018; Cai et al., 2018; He et al., 2018; Xiao et al., 2019; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018a, 2017b,a), constituent and dependency parsing have been well developed with neural network. 
These models attain state-of-the-art results for dependency parsing (Chen and Manning, 2014; Dozat and Manning, 2017; Ma et al., 2018) and constituent parsing (Dyer et al., 2016; Cross and Huang, 2016; Kitaev and Klein, 2018a). Since constituent and dependency share a lot of grammar and machine learning characteristics, it is a natural idea to study the relationship between constituent and dependency structures, and the joint learning of constituent and dependency parsing (Collins, 1997; Charniak, 2000; Charniak and Johnson, 2005; Farkas et al., 2011; Green and ˇZabokrtsk´y, 2012; Ren et al., 2013; Yoshikawa et al., 2017). To further exploit both strengths of the two representation forms, in this work, for the first time, we propose a graph-based parsing model that formulates constituent and dependency structures as simplified HPSG. 6 Conclusions This paper presents a simplified HPSG with two different decode methods which are evaluated on both constituent and dependency parsing. Despite the usefulness of HPSG in practice and its theoretical linguistic background, our model achieves new state-of-the-art results on both Chinese and English benchmark treebanks of both parsing tasks. Thus, this work is more than proposing a high-performance parsing model by exploring the relation between constituent and dependency structures. Our experiments show that joint learning of constituent and dependency is indeed superior to separate learning mode, and combining constituent and dependency score in joint training to parse a simplified HPSG can obtain further performance improvement. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally Normalized Transition-Based Neural Networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2442–2452. Joan Bresnan, Ash Asudeh, Ida Toivonen, and Stephen Wechsler. 2015. Lexical-functional syntax, volume 16. John Wiley & Sons. Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A Full End-to-End Semantic Role Labeler, Syntactic-agnostic Over Syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 2753– 2765. Ulrich Callmeier. 2000. PET - A platform for experimentation with efficient HPSG processing tech2405 niques. Natural Language Engineering, 6(1):99– 107. Eugene Charniak. 2000. A Maximum-EntropyInspired Parser. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL). Eugene Charniak and Mark Johnson. 2005. Coarseto-Fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 173–180. Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-Directed Attention for Neural Machine Translation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792–4799. Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional Attention with Agreement for Dependency Parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2204–2214. Do Kook Choe and Eugene Charniak. 2016. 
Parsing as Language Modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2331–2336. N. Chomsky. 1981. Lectures on Government and Binding. Mouton de Gruyter. John Cocke. 1969. Programming Languages and Their Compilers: Preliminary Notes. New York University. Michael Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In 35th Annual Meeting of the Association for Computational Linguistics (ACL). Ann Copestake. 2002. Implementing Typed Feature Structure Grammars, volume 110. CSLI publications Stanford. James Cross and Liang Huang. 2016. Span-Based Constituency Parsing with a Structure-Label System and Provably Optimal Dynamic Oracles . In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1–11. Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Lrec, volume 6, pages 449–454. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. abs/1810.04805. Timothy Dozat and Christopher D Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. arXiv preprint arXiv:1611.01734. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent Neural Network Grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 199–209. Jason Eisner. 1996. Efficient Normal-Form Parsing for Combinatory Categorial Grammar. In 34th Annual Meeting of the Association for Computational Linguistics (ACL). Rich`ard Farkas, Bernd Bohnet, and Helmut Schmid. 2011. Features for Phrase-Structure Reranking from Dependency Parses. In Proceedings of the 12th International Conference on Parsing Technologies, pages 209–214. Daniel Fried and Dan Klein. 2018. Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 469–476. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving Neural Parsing by Disentangling Model Combination and Reranking Effects . In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 161– 166. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s Going On in Neural Constituency Parsers? An Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT), pages 999– 1010. Nathan Green and Zdenˇek ˇZabokrtsk´y. 2012. Hybrid Combination of Constituency and Dependency Trees into an Ensemble Dependency Parser. In Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, pages 19–26. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for Semantic Role Labeling, To Be, Or Not To Be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2061–2071. Richard Johansson and Pierre Nugues. 2007. Extended Constituent-to-Dependency Conversion for English. In Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA 2007), pages 105–112. 2406 Kasami, Tadao. 1965. An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Languages. 
Technical Report Air Force Cambridge Research Lab. Bernd Kiefer and Hans-Ulrich Krieger. 2004. A Context-Free Superset Approximation of Unification-Based Grammars. In New developments in parsing technology, pages 229–250. Nikita Kitaev and Dan Klein. 2018a. Constituency Parsing with a Self-Attentive Encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2676– 2686. Nikita Kitaev and Dan Klein. 2018b. Multilingual Constituency Parsing with Self-Attention and PreTraining. arXiv preprint arXiv:1812.11760. Hans-Ulrich Krieger. 2007. From UBGs to CFGs A practical corpus-driven approach. Natural Language Engineering, 13(4):317–351. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What Do Recurrent Neural Network Grammars Learn About Syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers (EACL), pages 1249–1258. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1744–1753. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018a. Seq2seq Dependency Parsing. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 3203– 3214. Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018b. A Unified Syntax-aware Framework for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2401–2411. Zuchao Li, Shexia He, Zhuosheng Zhang, and Hai Zhao. 2018c. Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (CONLL), pages 65–73. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/Too Simple Adaptations of Word2Vec for Syntax Problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT), pages 1299–1304. Jiangming Liu and Yue Zhang. 2017a. In-Order Transition-based Constituent Parsing. Transactions of the Association for Computational Linguistics (TACL), 5:413–424. Jiangming Liu and Yue Zhang. 2017b. Shift-Reduce Constituent Parsing with Neural Lookahead Features. Transactions of the Association for Computational Linguistics (TACL), 5:45–58. Xuezhe Ma and Eduard Hovy. 2017. Neural Probabilistic Model for Non-projective MST Parsing. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP), pages 59–69. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. StackPointer Networks for Dependency Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1403–1414. Xuezhe Ma, Xiaotian Zhang, Hai Zhao, and Bao-Liang Lu. 2010. Dependency Parser for Chinese Constituent Parsing. In CIPS-SIGHAN Joint Conference on Chinese Language Processing (CLP). Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2). Yusuke Miyao, Takashi Ninomiya, and Jun’ichi Tsujii. 
2004. Corpus-Oriented Grammar Development for Acquiring a Head-Driven Phrase Structure Grammar from the Penn Treebank. In International Conference on Natural Language Processing (IJCNLP), pages 684–693. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT), pages 2227–2237. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference (NAACL), pages 404–411. Carl Pollard and Ivan A Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press. 2407 Xiaona Ren, Xiao Chen, and Chunyu Kit. 2013. Combine Constituent and Dependency Parsing via Reranking. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), pages 2155–2161. Yves Schabes, Anne Abeille, and Aravind K. Joshi. 1988. Parsing strategies with ’lexicalized’ grammars: Application to tree adjoining grammars. In Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics (COLING). Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the Tree: Constituency Parsing with Neural Syntactic Distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1171–1180. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A Minimal Span-Based Neural Constituency Parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 818–827. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. Effective Inference for Generative Neural Parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1695–1700. Zhiyang Teng and Yue Zhang. 2018. Two Local Models for Neural Constituent Parsing. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 119–132. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich Part-ofspeech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL), pages 173–180. Gertjan Van Noord et al. 2006. At Last Parsing is Now Operational. In TALN06. Verbum Ex Machina. Actes de la 13e conference sur le traitement automatique des langues naturelles, pages 20–42. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems (NIPS), pages 5998–6008. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence Embedding for Neural Machine Translation Domain Adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 560–566. 
Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence Selection and Weighting for Neural Machine Translation Domain Adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10):1727–1741. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance Weighting for Neural Machine Translation Domain Adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1482–1488. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018a. Dynamic Sentence Sampling for Efficient Training of Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 298–304. Wenhui Wang, Baobao Chang, and Mairgup Mansur. 2018b. Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2857–2863. Zhiguo Wang, Haitao Mi, and Nianwen Xue. 2015. Feature Optimization for Constituent Parsing via Neural Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACLIJCNLP), pages 1138–1147. Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, and Kehai Chen. 2019. Lattice-Based Transformer Encoder for Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Masashi Yoshikawa, Hiroshi Noji, and Yuji Matsumoto. 2017. A* CCG Parsing with a Supertag and Dependency Factored Model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 277–287. Younger, Daniel H. 1975. Recognition and Parsing of Context-Free Languages in Time n3.Information Control, 10(2) : 189 −−208. Kun Yu, Yusuke Miyao, Xiangli Wang, Takuya Matsuzaki, and Junichi Tsujii. 2010. Semiautomatically developing Chinese HPSG grammar from the Penn Chinese treebank for deep parsing. In Coling 2010: Posters, pages 1417–1425. Yi Zhang and Hans-Ulrich Krieger. 2011. Large-Scale Corpus-Driven PCFG Approximation of an HPSG. In Proceedings of the 12th International Conference on Parsing Technologies, pages 198–208. Yue Zhang and Stephen Clark. 2008. A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 562–571. 2408 Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2018a. Exploring Recombination for Efficient Decoding of Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4785–4790. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic Graph-based Dependency Parsing with Convolutional Neural Network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1382–1392. Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2018b. Subword-augmented Embedding for Cloze Reading Comprehension. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1802–1814. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018c. Modeling Multiturn Conversation with Deep Utterance Aggregation. 
In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 3740–3752. Zhuosheng Zhang and Hai Zhao. 2018. One-shot Learning for Question-Answering in Gaokao History Challenge. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 449–461. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and Accurate ShiftReduce Constituent Parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 434–443.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2409–2419 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2409 Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning Minlong Peng∗, Xiaoyu Xing∗, Qi Zhang, Jinlan Fu, Xuanjing Huang School of Computer Science, Fudan University, Shanghai, China {mlpeng16,xyxing18,qz,fujl16,xjhuang}@fudan.edu.cn Abstract In this work, we explore the way to perform named entity recognition (NER) using only unlabeled data and named entity dictionaries. To this end, we formulate the task as a positive-unlabeled (PU) learning problem and accordingly propose a novel PU learning algorithm to perform the task. We prove that the proposed algorithm can unbiasedly and consistently estimate the task loss as if there is fully labeled data. A key feature of the proposed method is that it does not require the dictionaries to label every entity within a sentence, and it even does not require the dictionaries to label all of the words constituting an entity. This greatly reduces the requirement on the quality of the dictionaries and makes our method generalize well with quite simple dictionaries. Empirical studies on four public NER datasets demonstrate the effectiveness of our proposed method. We have published the source code at https:// github.com/v-mipeng/LexiconNER. 1 Introduction Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product and organization names in unstructured text. It is a fundamental component in many natural language processing tasks such as machine translation (Babych and Hartley, 2003), knowledge base construction (Riedel et al., 2013; Shen et al., 2012), automatic question answering (Bordes et al., 2015), search (Zhu et al., 2005), etc. In this field, supervised methods, ranging from the typical graph models (Zhou and Su, 2002; McCallum et al., 2000; McCallum and Li, 2003; Settles, 2004) to current popular neural-networkbased models (Chiu and Nichols, 2016; Lample et al., 2016; Gridach, 2017; Liu et al., 2018; Zhang ∗Equal contribution. Dictionary Simons David Anna Joe Bobick was managed by weight legend Joe Frazier Figure 1: Data labeling example for person names using our constructed dictionary. and Yang, 2018), have achieved great success. However, these supervised methods often require large scale fine-grained annotations (label every word of a sentence) to generalize well. This makes it hard to apply them to label-few domains, e.g., bio/medical domains (Del˙eger et al., 2016). In this work, we explore the way to perform NER using only unlabeled data and named entity dictionaries, which are relatively easier to obtain compared with labeled data. A natural practice to perform the task is to scan through the query text using the dictionary and treat terms matched with a list of entries of the dictionary as the entities (Nadeau et al., 2006; Gerner et al., 2010; Liu et al., 2015; Yang et al., 2018). However, this practice requires very high quality named entity dictionaries that cover most of entities, otherwise it will fail with poor performance. As shown in Figure 1, the constructed dictionary of person names only labels one entity within the query text, which contains two entities “Bobick” and “Joe Frazier”, and it only labels one word “Joe” out of the two-word entity “Joe Frazier”. 
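To make this limitation concrete, the following is a minimal sketch of greedy longest-match dictionary labeling applied to the sentence in Figure 1. The toy dictionary and the context window are illustrative stand-ins for the constructed person-name dictionary, not the actual resource used in the experiments.

```python
# Toy person-name dictionary (a stand-in for the constructed one, which contains
# entries such as Simons, David, Anna, and Joe).
dictionary = {("Simons",), ("David",), ("Anna",), ("Joe",)}

def dict_label(words, dictionary, max_len=4):
    """Greedy longest-match labeling: mark every word covered by a dictionary entry."""
    labels = [0] * len(words)                 # 1 = labeled as an entity word
    i = 0
    while i < len(words):
        matched = 0
        for j in range(min(max_len, len(words) - i), 0, -1):
            if tuple(words[i:i + j]) in dictionary:
                labels[i:i + j] = [1] * j
                matched = j
                break
        i += matched if matched else 1
    return labels

sentence = "Bobick was managed by weight legend Joe Frazier".split()
print(dict_label(sentence, dictionary))
# [0, 0, 0, 0, 0, 0, 1, 0]: "Bobick" and "Frazier" are missed; only "Joe" is labeled.
```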
To address this problem, an intuitive solution is to further perform supervised or semi-supervised learning using the dictionary labeled data. However, since it does not guarantee that the dictionary covers all entity words (words being of entities) within a sentence, we cannot simply treat a word 2410 not labeled by the dictionary as the non-entity word. Take the data labeling results depicted in Figure 1 as an example. Simply treating “Bobick” and “Frazier” as non-entity words and then performing supervised learning will introduce label noise to the supervised classifier. Therefore, when using the dictionary to perform data labeling, we can actually only obtain some entity words and a bunch of unlabeled data comprising of both entity and non-entity words. In this case, the conventional supervised or semi-supervised learning algorithms are not suitable, since they usually require labeled data of all classes. With this consideration, we propose to formulate the task as a positive-unlabeled (PU) learning problem and accordingly introduce a novel PU learning algorithm to perform the task. In our proposed method, the labeled entity words form the positive (P) data and the rest form the unlabeled (U) data for PU learning. We proved that the proposed algorithm can unbiasedly and consistently estimate the task loss as if there is fully labeled data, under the assumption that the labeled P data can reveal the data distribution of class P. Of course, since words labeled by the dictionary only cover part of entities, it cannot fully reveal data distribution of entity words. To deal with this problem, we propose an adapted method, motivated by the AdaSampling algorithm (Yang et al., 2017), to enrich the dictionary. We evaluate the effectiveness of our proposed method on four NER datasets. Experimental results show that it can even achieve comparable performance with several supervised methods, using quite simple dictionaries. Contributions of this work can be summarized as follows: 1) We proposed a novel PU learning algorithm to perform the NER task using only unlabeled data and named entity dictionaries. 2) We proved that the proposed algorithm can unbiasedly and consistently estimate the task loss as if there is fully labeled data, under the assumption that the entities found out by the dictionary can reveal the distribution of entities. 3) To make the above assumption hold as far as possible, we propose an adapted method, motivated by the AdaSampling algorithm, to enrich the dictionary. 4) We empirically prove the effectiveness of our proposed method with extensive experimental studies on four NER datasets. 2 Preliminaries 2.1 Risk Minimization Let X ∈X and Y ∈Y be the input and output random variables, where X ⊂Rd and Y = {0, 1} denote the space of X and Y, respectively. Let f : X →R denote a classifier. A loss function is a map ℓ: R × Y →R+. Given any loss function ℓand a classifier f, we define the ℓ-risk of f by: Rℓ(f) = EX,Yℓ(f(x), yx) (1) where E denotes the expectation and its subscript indicates the random variables with respect to which the expectation is taken. In ordinary supervised learning, we estimate Rℓwith the empirical loss ˆRℓ: ˆRℓ= 1 n n X i=1 ℓ(f(xi), yi), (2) and update model parameters to learn a classifier f∗that minimizes ˆRℓ: f∗= arg min f ˆRℓ(f). 
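For reference, a minimal sketch of this ordinary empirical risk, using for concreteness the bounded mean absolute error that the paper adopts later and a toy fully labeled sample; both choices are illustrative assumptions.

```python
import numpy as np

def mae_loss(scores, labels):
    """Per-example loss l(f(x), y) = |y - f(x)| for scores f(x) in (0, 1)."""
    return np.abs(labels - scores)

def empirical_risk(scores, labels):
    """Ordinary supervised estimate of R_l(f): the average loss over labeled data."""
    return mae_loss(scores, labels).mean()

# Toy fully labeled data: predicted probabilities and gold binary labels.
scores = np.array([0.9, 0.2, 0.7, 0.1])
labels = np.array([1, 0, 1, 0])
print(empirical_risk(scores, labels))  # 0.175
```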
(3) 2.2 Unbiased Positive-Unlabeled learning Unbiased positive-unlabeled learning (uPU) (du Plessis et al., 2014) aims to estimate Rℓwhen there are only a set of positive (P) examples and a set of unlabeled (U) examples, which contains both positive and negative examples. Rℓcan also be formulated by: Rℓ= πnEX,Y=0ℓ(f(x), 0)+πpEX,Y=1ℓ(f(x), 1), (4) where πp = P(Y = 1) and πn = P(Y = 0). Note that EX,Y=1ℓ(f(x), 1) can be effectively estimated using positive data. Therefore, the main problem of PU learning is how to estimate EX,Y=0ℓ(f(x), 0) without using negative labeled data. To this end, it further formulates: πnEX,Y=0ℓ(f(x), 0) = EXℓ(f(x), 0) −πpEX,Y=1ℓ(f(x), 0). This equation holds because: P(Y = 0)P(X|Y = 0) = P(X) −P(Y = 1)P(X|Y = 1). According to this equation, we can now estimate πnEX,Y=0ℓ(f(x), 0) using only unlabeled data and positive data. Thus, Rℓcan be effectively 2411 estimated using only unlabeled data and positive data. In summary, we have that Rℓcan be unbiasedly estimated by: ˆRℓ= 1 nu nu X i=1 ℓ(f(xu i ), 0)+ πp np np X i=1 (ℓ(f(xp i ), 1) −ℓ(f(xp i ), 0)) , (5) where xu i and xp i denotes an unlabeled and positive example, respectively, and nu and np denotes the number of unlabeled and positive examples, respectively. 2.3 Consistent Positive-Unlabeled Learning As we know, a good estimation should be not only unbiased but also consistent. The above induction has proved that ˆRℓis an unbiased estimation of Rℓ. In this section, we show that ˆRℓcan be also a consistent estimation of Rℓwhen the loss function ℓis upper bounded. We argue that this is the first work to give such a proof, which is summarized in the following theorem: Theorem 1. If ℓis bounded by [0, M], then for any ϵ > 0, P{S ∈D| sup f∈HR |Rℓ−ˆRℓ| ≤ϵ} ≥1 −2N( ϵ 4(1 + 2πp)LM )e −min(np,nu)ϵ2 8(1+2πp)2B2 , (6) where B = LMM + C0. Here, LM denotes the Lipschitz constant that LM > ∂ℓ(w,y) ∂w , ∀w ∈R, C0 = maxy ℓ(0, y), and H denotes a Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). HR is the hypothesis space for each given R > 0 in the ball of radius R in H. N(ϵ) denotes the covering number of HR following Theorem C in (Cucker and Smale, 2002). Proof. Proof appears in Appendix A. Remark 1. Let us intuitively think about what if ℓ is not upper bounded (e.g., the cross entropy loss function). Suppose that there is a positive example xp i not occurring in the unlabeled data set. Then, its corresponding risk defined in ˆRℓis V (xp i ) = πp np (ℓ(f(xp i ), 1) −ℓ(f(xp i ), 0)). If ℓis not upper bounded, to achieve a small value of V (xp i ), f can heavily overfit xp i making ℓ(f(xp i ), 0) →+∞, and in turn V (xp i ) →−∞. From this analysis, we can expect that, when using a unbounded loss function and a flexible classifier, ˆRℓwill dramatically decrease to a far below zero value. Therefore, in this work, we force ℓto be bounded by replacing the common unbounded cross entropy loss function with the mean absolute error, resulting in a bounded unbiased positiveunlabeled learning (buPU) algorithm. This slightly differs from the setting of uPU, which only requires ℓto be symmetric. We further combine buPU with the nonnegative constraint proposed by Kiryo et al. (2017), which has proved to be effectiveness in alleviating overfitting, obtaining a bounded non-negative positive-unlabeled learning (bnPU) algorithm: ˆRℓ= πp np np X i=1 ℓ(f(xp i ), 1)+ max 0, 1 nu nu X i=1 ℓ(f(xu i ), 0) −πp np np X i=1 ℓ(f(xp i ), 0) ! . 
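The following is a minimal NumPy sketch of this bounded non-negative estimator, assuming classifier outputs f(x) in (0, 1), the mean absolute error as the bounded loss, and a hypothetical class prior; it illustrates the formula and is not the authors' released implementation.

```python
import numpy as np

def mae(scores, y):
    """Bounded per-example loss l(f(x), y) = |y - f(x)|."""
    return np.abs(y - scores)

def bnpu_risk(pos_scores, unl_scores, pi_p):
    """Bounded non-negative PU estimate of the risk from positive and unlabeled scores."""
    pos_risk = pi_p * mae(pos_scores, 1).mean()                      # pi_p * R+_p(f)
    neg_risk = mae(unl_scores, 0).mean() - pi_p * mae(pos_scores, 0).mean()
    return pos_risk + max(0.0, neg_risk)                             # clip the negative-risk term at 0

# Toy scores for dictionary-labeled (positive) words and unlabeled words,
# with a hypothetical class prior pi_p.
pos_scores = np.array([0.8, 0.6, 0.9])
unl_scores = np.array([0.10, 0.05, 0.70, 0.20, 0.02])
print(bnpu_risk(pos_scores, unl_scores, pi_p=0.05))
```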
(7) 3 Dictionary-based NER with PU Learning In the following, we first define some notations used throughout this work, and illustrate the label assignment mechanism used in our method. Then, we precisely illustrate the data labeling process using the dictionary. After that, we show the detail for building the PU classifier, including word representation, loss definition, and label inference. Finally, we show the adapted method for enriching the dictionary. 3.1 Notations We denote W ∈V and S = {W} ∈S be the word-level and sentence-level input random variables, where V is the word vocabulary and S is the sentence space. De denotes the entity dictionary for a given entity type and D = {s1, · · · , sN} ⊆S denotes the unlabelled dataset. We denote D+ the set of entity words labeled by De, and denote Du the rest unlabeled words. 3.2 Label Assignment Mechanism In this work, we apply the binary label assignment mechanism for the NER task instead of the prevalent BIO or BIOES mechanism. Entity words are mapped to the positive class and nonentity words are mapped to the negative class. This is because, as we have discussed in the §1, the dictionary cannot guarantee to cover all entity words within a sentence. It may only label the beginning (B), the internal (I), or the last (E) word 2412 Algorithm 1 Data Labeing using the Dictionary 1: Input: named entity dictionary De, a sentence s = {w1, · · · , wn}, and the context size k 2: Result: partial labeled sentence 3: Initialize: i ←1 4: while i ≤n do 5: for j ∈[k, · · · , 0] do 6: if {wi, · · · , wmax(i+j,n)} ∈De then 7: label {wi, · · · , wmax(i+j,n)} as positive class. 8: i ←i + j + 1 9: break 10: if j == 0 then 11: i ←i + 1 of an entity. Therefore, we cannot distinguish which type, B, I, or E, the labeled entity word belongs to. Take the data labeling results depicted in Figure 1 as an example. With the dictionary, we know that “Joe” is an entity word. However we cannot know that it is the beginning of the person name “Joe Frazier”. 3.3 Data Labeling using the Dictionary To obtain D+, we use the maximum matching algorithm (Liu et al., 1994; Xue, 2003) to perform data labeling with De. It is a greedy search routine that walks through a sentence trying to find the longest string, starting from a given point in the sentence, that matches with an entry in the dictionary. The general process of this algorithm is summarized in Alg. 1. In our experiments, we intuitively set the context size k = 4. 3.4 Build PU Learning Classifier In this work, we use a neural-network-based architecture to implement the classifier f, and this architecture is shared by different entity types. Word Representation. Context-independent word representation consists of three part of features, i.e., the character sequence representation ec(w), the word embedding ew(w), and some human designed features on the word-face eh(w). For the character-level representation ec(w) of w, we use the one-layer convolution network model (Kim, 2014) on its character sequence {c1, c2, · · · , cm} ∈Vc, where Vc is the character vocabulary. Each character c is represented using v(c) = Wc(c), where Wc denotes a character embedding lookup table. The one-layer convolution network is then applied to {v(c1), v(c2), · · · , v(cm)} to obtain ec(w). For the word-level representation ew(w) of w, we introduce an unique dense vector for w, which is initialized with Stanford’s GloVe word embeddings1 (Pennington et al., 2014) and finetuned during model training. 
For the human designed features eh(w) of w, we introduce a set of binary feature indicators. These indicators are designed on options proposed by Collobert et al. (2011): allCaps, upperInitial, lowercase, mixedCaps, noinfo. If any feature is activated, its corresponding indicator is set to 1, otherwise 0. This way, it can keep the capitalization information erased during lookup of the word embedding. The final word presentation independent to its context e(w) ∈ Rkw of w, is obtained by concatenating these three part of features: e(w) = [ec(w) ⊕ew(w) ⊕eh(w)], (8) where ⊕denotes the concatenation operation. Based on this representation, we apply a bidirectional LSTM (BiLSTM) network (Huang et al., 2015), taking e(wt), wt ∈ s as step input, to model context information of wt given the sentence s. Hidden states of the forward and backward LSTMs at the t step are concatenated: e(wt|s) = [−→ h t ⊕←− h t], (9) to form the representation of wt given s. Loss Definition. Given the word representation, e(w|s), of w conditional on s, its probability to be predicted as positive class is modeled by: f(w|s) = σ(wT p e(w|s) + b), (10) where σ denotes the sigmoid function, wp is a trainable parameter vector and b is the bias term. The prediction risk on this word given label y is defined by: ℓ(f(w|s), y) = |y −f(w|s)|. (11) Note that ℓ(f(w|s), y) ∈[0, 1) is upper bounded. The empirical training loss is defined by: ˆRℓ(f) = πp ˆR+ p (f) + max n 0, ˆR− u (f) −πp ˆR− p (f) o , (12) 1 http://nlp.stanford.edu/projects/glove/ 2413 where ˆR+ p (f) = 1 |D+| X w|s∈D+ ℓ(f(w|s), 1), ˆR− p (f) = 1 −ˆR+ p (f), ˆR− u (f) = 1 |Du| X w|s∈Du ℓ(f(w|s), 0), and πp is the ratio of entity words within Du. In addition, during our experiments, we find out that due to the class imbalance problem (πp is very small), f inclines to predict all instances as the negative class, achieving a high value of accuracy while a small value of F1 on the positive class. This is unacceptable for NER. Therefore, we introduce a class weight γ for the positive class and accordingly redefine the training loss as: ˆRℓ(f) = γ · πp ˆR+ p (f) + max n 0, ˆR− u (f) −πp ˆR− p (f) o . (13) Label Inference. Once the PU classifier has been trained, we use it to perform label prediction. However, since we build a distinct classifier for each entity type, a word may be predicted as positive class by multiple classifiers. To address the conflict, we choose the type with the highest prediction probability (evaluated by f(w|s)). Predictions of classifiers of the other types are reset to 0. At inference time, we first solve the type conflict using the above method. After that, consecutive words being predicted as positive class by the classifier of the same type are treated as an entity. Specifically, for sequence s = {w1, w2, w3, w4, w5}, if its predicted labels by the classifier of a given type are L = {1, 1, 0, 0, 1}, then we treat {w1, w2} and {w5} as two entities of the type. 3.5 Adapted PU Learning for NER In PU learning, we use the empirical risk on labeled positive data, 1 np Pnp i=1 ℓ(f(xp i ), 1), to estimate the expectation risk of positive data. This requires that the positive examples xp i draw identically independent from the distribution P(X|Y = 1). The requirement is usually hard to satisfy, using a simple dictionary to perform data labeling. To alleviate this problem, we propose an adapted method, motivated by the AdaSampling (Yang et al., 2017) algorithm. The key idea of the proposed method is to adaptively enrich the named entity dictionary. 
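Before the dictionary adaptation details, the following is a minimal sketch of the label inference step described above: per-type word probabilities are assumed as input, type conflicts are resolved by keeping only the highest-probability type, and consecutive positive predictions of the same type are grouped into entities. The 0.5 threshold and the toy probabilities are illustrative assumptions.

```python
import numpy as np

def infer_entities(probs, threshold=0.5):
    """probs: dict mapping entity type -> array of per-word probabilities f(w|s).
    Returns (type, start, end) entity spans with end exclusive."""
    types = list(probs)
    mat = np.stack([probs[t] for t in types])          # (num_types, num_words)
    positive = mat >= threshold
    # Resolve type conflicts: a word keeps only the type with the highest probability.
    best = mat.argmax(axis=0)
    keep = positive & (np.arange(len(types))[:, None] == best[None, :])
    labels = [types[best[i]] if keep[:, i].any() else None for i in range(mat.shape[1])]
    # Group consecutive positive words of the same type into entities.
    entities, start = [], None
    for i, lab in enumerate(labels + [None]):          # sentinel flushes the last span
        if start is not None and (lab is None or lab != labels[start]):
            entities.append((labels[start], start, i))
            start = None
        if lab is not None and start is None:
            start = i
    return entities

# Toy example mirroring the text: PER predictions {1, 1, 0, 0, 1} over five words.
probs = {"PER": np.array([0.9, 0.8, 0.1, 0.2, 0.7]),
         "LOC": np.array([0.2, 0.1, 0.3, 0.1, 0.1])}
print(infer_entities(probs))  # [('PER', 0, 2), ('PER', 4, 5)], i.e. {w1, w2} and {w5}
```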
Specifically, we first train a PU learning classifier f and use it to label the unlabeled dataset. Based on the predicted label, it extracts all of the predicted entities. For a predicted entity, if it occurs over k times and all of its occurrences within the unlabeled dataset are predicted as entities, we will add it into the entity dictionary in the next iteration. This process iterates several times until the dictionary does not change. 4 Experiments In this section, we empirically study: • the general performance of our proposed method using simple dictionaries; • the influence of the unlabeled data size; • the influence of dictionary quality, such as size, data labeling precision and recall; • and the influence of the estimation of πp. 4.1 Compared Methods There are five indispensable baselines with which our proposed Adapted PU learning (AdaPU) algorithm should compare. The first one is the dictionary matching method, which we call Matching. It directly uses the constructed named entity dictionary to label the testing set as illustrated in Alg. 1. The second one is the supervised method that uses the same architecture as f but trains on fine-grained annotations (fully labeled Du and D+). In addition, it applies the BIOES label assignment mechanism for model training. We call this baseline BiLSTM. The third one is the uPU algorithm, which uses cross entropy loss to implement ℓ. The fourth one is the bounded uPU (buPU) algorithm, which implement ell with mean absolute error. Compared with AdaPU, it does not apply the non-negative constraint and does not perform dictionary adaptation. The last one is the bounded non-negative PU learning (bnPU) algorithm, which does not perform dictionary adaptation compared with AdaPU. Additionally, we compared our method with several representative supervised methods that have achieved state-of-the-art performance on NER. These methods include: Stanford NER (MEMM) (McCallum et al., 2000) a maximumentropy-markov-model-based method; Stanford NER (CRF) (Finkel et al., 2005) a conditionalrandom-field-based method; and BiLSTM+CRF 2414 Dataset Type # of l.w. Precision Recall CoNLL (en) PER 2,507 89.26 17.38 LOC 4,384 85.07 50.03 ORG 3,198 86.17 29.45 MISC 1,464 92.13 30.59 CoNLL (sp) PER 574 90.24 37.84 LOC 272 84.93 16.39 ORG 702 96.87 27.19 MISC 157 68.15 11.94 MUC PER 788 74.56 28.50 LOC 511 89.43 43.33 ORG 1,257 97.74 30.38 Twitter PER 1,842 79.26 26.03 LOC 1,109 90.96 34.15 ORG 398 83.77 20.58 Table 1: Data labeling results using the dictionary: the number of labeled words (# of l.w.), the word-level precision ( # of true labeled words # of total labeled words) and recall. (Huang et al., 2015) a neural-network-based method as the BiLSTM baseline, but additionally introducing a CRF layer. 4.2 Datasets CoNLL (en). CoNLL2003 NER Shared Task Dataset in English (Tjong Kim Sang and De Meulder, 2003) collected from Reuters News. It is annotated by four types: PER, LOC, ORG, and MISC. We used the official split training set for model training, and testb for testing in our experiments, which contains 203K and 46K tokens, respectively. In addition, there are about 456k additional unlabeled tokens. CoNLL (sp). CoNLL2002 Spanish NER Shared Task Dataset (Sang and Erik, 2002) collected from Spanish EFE News Agency. It is also annotated by PER, LOC, ORG, and MISC types. The training and test data sets contain 273k and 53k lines, respectively. MUC. Message Understanding Conference 7 released by Chinchor (1998) for NER. 
It has about 190K tokens in the training set and 64K tokens in the testing set. For the sake of homogeneity, we perform entity detection on PER, LOC, and ORG in this study. Twitter. Twitter is a dataset collected from Twitter and released by Zhang et al. (2018). It contains 4,000 tweets for training and 3,257 tweets for testing. Every tweet contains both textual information and visual information. In this work, we only used the textual information to perform NER and we also only performed entity detection Dataset PER LOC ORG MISC CoNLL (en) .055/.053 .041/.038 .049/.045 .023/.020 CoNLL (sp) .019/.018 .019/.017 .030/.027 −−−− MUC-7 .022/.019 .025/.023 .037/.034 −−−− Twitter .058/.055 .046/.044 .021/.018 −−−− Table 2: True/Estimated value of πp. on PER, LOC, and ORG. For the proposed method and the PU-learningbased baselines, we used the training set of each dataset as D. Note that we did not use label information of each training set for training these models. 4.3 Build Named Entity Dictionary For CoNLL (en), MUC, and Twitter datasets, we collected the first 2,000 popular English names in England and Wales in 2015 from ONS2 to construct the PER dictionary. For LOC, we collected names of countries and their top two popular cities3 to construct the dictionary. While for MISC, we turned country names into the adjective forms, for example, England →English, and China →Chinese, and used the resultant forms to construct the dictionary. For ORG, we collected names of popular organizations and their corresponding abbreviations from Wikipedia 4 to construct the dictionary. We also added names of some international companies5, such as Microsoft, Google, and Facebook, into the dictionary. In addition, we added some common words occurring in organization names such as “Conference”, “Cooperation”, “Commission”, and so on, into the dictionary. For CoNLL (sp), we used DBpedia query editor6 to select the most common 2000 names of the people who was born in Spain to construct the PER dictionary. We further used Google translator to translate the English LOC, ORG, MISC dictionary into Spanish. The resultant named entity dictionaries contain 2,000 person names, 748 location names, 353 organization names, and 104 MISC entities. Table 1 lists some statistic information of the data labeling results with these dictionaries using Alg. 
2http://www.ons.gov.uk/ons/index.html 3https://en.wikipedia.org/wiki/List of countries by national capital largest and second-largest cities 4https://en.wikipedia.org/wiki/List of intergovernmental organizations 5https://en.wikipedia.org/wiki/List of multinational corporations 6http://dbpedia.org 2415 Dataset Type MEMM CRF BiLSTM BiLSTM+CRF Matching uPU buPU bnPU AdaPU CoNLL (en) PER 91.61 93.12 94.21 95.71 6.70 74.22 85.01 87.21 90.17 LOC 89.72 91.15 91.76 93.02 67.16 69.88 81.27 83.37 85.62 ORG 80.60 81.91 83.21 88.45 46.65 73.64 74.72 75.29 76.03 MISC 77.45 79.35 76.00 79.86 53.98 68.90 68.90 66.88 69.30 Overall 86.13 87.94 88.30 90.01 44.90 72.32 79.20 80.74 82.94 CoNLL (sp) PER 86.18 86.77 88.93 90.41 32.40 82.28 83.76 84.30 85.10 LOC 78.48 80.30 75.43 80.55 28.53 70.44 72.55 73.68 75.23 ORG 79.23 80.83 79.27 83.26 55.76 69.82 71.22 69.82 72.28 Overall 81.14 82.63 80.28 84.74 42.23 73.84 74.50 74.43 75.85 MUC PER 86.32 87.50 85.71 84.55 27.84 77.98 84.94 84.21 85.26 LOC 81.70 83.83 79.48 83.43 62.82 64.56 72.62 75.61 77.35 ORG 68.48 72.33 66.17 67.66 51.60 45.30 58.39 58.75 60.15 Overall 74.66 76.47 73.12 75.08 50.12 63.87 69.89 70.06 71.60 Twitter PER 73.85 80.86 80.61 80.77 41.33 67.30 72.72 72.68 74.66 LOC 69.35 75.39 73.52 72.56 49.74 59.28 61.41 63.44 65.18 ORG 41.81 47.77 41.39 41.33 32.38 31.51 36.78 35.77 36.62 Overall 61.48 67.15 65.60 65.32 37.90 53.63 57.16 57.54 59.36 Table 3: Model performance by F1 on the testing set of each dataset. The first group of models are all fullysupervised, which use manual fine-grained annotations. while the second group of models use only named entity dictionaries to perform the NER task. 0 20% 40% 60% 80% 100% 300% Propotion 80 82 84 86 88 90 92 94 96 F1 83.51 +232 86.87 +88 85.18 +68 89.86 +62 90.17 +33 90.65 BiLSTM (a) PER 0 20% 40% 60% 80% 100% 300% Propotion 70 75 80 85 90 95 F1 79.6 +201 81.24 +31 83.74 +42 83.97 +19 85.62 +23 89.86 BiLSTM (b) LOC 0 20% 40% 60% 80% 100% 300% Propotion 60 65 70 75 80 85 F1 64.9 +137 71.94 +81 75.27 +6 77.57 +10 76.03 +3 76.51 BiLSTM (c) ORG Figure 2: F1 of AdaPU on the testing set of CoNLL (en) using different portion of the training data set for model training. The red dot line denotes performance of BiLSTM. ’+k’ means that it labels k more unique words on the additional 20% (e.g., 40%-20%) of training data. 1. From the table, we can see that the precision of the data labeling is acceptable but the recall is quite poor. This is expectable and is a typical problem of the method using only dictionaries to perform NER. 4.4 Estimate πp Before disscussing the estimation of πp defined in Eq. (12), let us first look at some statistic information of the four studied datasets. Table 2 lists the true value of πp = (# of entity words)/(# of words of the training set) for different entity types over dataset. From the table, we can see that the variation of πp cross different datasets is quite small. This motivates us to use the value of πp obtained from an existing labeled dataset as an initialization. The labeled dataset may be from other domains or be out-of-date. In this work, we initially set πp = 0.04, 0.04, 0.05, 0.03 for PER, LOC, ORG, and MISC, respectively. Starting from this value, we trained the proposed model and used it to perform prediction on the unlabeled dataset. Based on the predicted results, we re-estimate the value of πp. The resulted values are listed in table 2 and were used throughout our experiments without further illustration. 
4.5 Results Following the protocol of most previous works, we apply the entity-level (exact entity match) F1 to evaluate model performance. General Performance. Table 3 shows model performance by entity type and the overall performance on the four tested datasets. From the table, we can observe: 1) The performance of the Matching model is quite poor compared to other models. We found out that it mainly resulted from low recall values. This accords with our 2416 discussion in §1 and shows its inapplicability using such simple dictionaries. 2) Those PUlearning-based methods achieve significant improvement over Matching on all datasets. This demonstrates the effectiveness of the PU learning framework for NER in the studied setting. 3) buPU greatly outperforms uPU. This verifies our analysis in §2.3 about the necessity to make ℓupper bounded. 4) bnPU slightly outperforms buPU on most of datasets and entity types. This verifies the effectiveness of the non-negative constraint proposed by Kiryo et al. (2017). 5) The proposed AdaPU model achieves further improvement over bnPU, and it even achieves comparable results with some supervised methods, especially for the PER type. This verifies the effectiveness of our proposed method for enriching the named entity dictionaries. Type Size Precision Recall PER 10,159 (2,000) 89.65 (89.26) 19.08 (17.38) LOC 10,106 (748) 71.77 (85.07) 56.42 (50.03) ORG 10,039 (353) 83.42 (86.17) 28.59 (29.45) Table 4: Statistic information of the extended dictionary v.s. (that of the original dictionary). Model PER LOC ORG Overall Matching 9.10 (6.70) 69.85 (67.16) 45.52 (46.65) 41.40 (39.39) AdaPU 91.14 (90.17) 77.60 (85.62) 76.67 (76.03) 81.87 (82.94) Table 5: F1 of the proposed method using the extend dictionary v.s. (that using the original dictionary) on CoNLL (en) testing set. Influence of Unlabeled Data Size. We further study the influence of the unlabeled data size to our proposed method. To perform the study, we used 20%, 40%, 60%, 80%, 100%, and 300% (using additional unlabeled data) of the training data set of CoNLL (en) to train AdaPU, respectively. Figure 2 depicts the results of this study on PER, LOC, and ORG. From the figure, we can see that increasing the size of training data will, in general, improve the performance of AdaPU, but the improvements are diminishing. Our explanation of this phenomenon is that when the data size exceeds a threshold, the number of unique patterns becomes an sublinear function of the data size. This was verified by the observation from the figure, for example, on PER, it labeled 232 unique words on 20% of training data, while it only labeled 88 more unique words πp PER LOC ORG MISC Overall True 90.21 85.06 77.17 69.85 83.13 Estimated 90.17 85.62 76.03 69.30 82.94 Table 6: F1 of the proposed method on CoNLL (en) when using True/Estimated value of πp. after introducing additional 20% of training data. Influence of Dictionary. We then study the influence of the dictionary on our proposed model. To this end, we extended the dictionary with DBpedia using the same protocol proposed by Chiu and Nichols (2016). Statistic information of the resultant dictionary is listed in table 4, and model performance using this dictionary is listed in table 5. A noteworthy observation of the results is that, on LOC, the performance should decrease a lot when using the extended dictionary. We turn to table 4 for the explanation. 
We can see from the table that, on LOC, the data labeling precision dropped about 13 points (85.07 →71.77) using the extend dictionary. This means that it introduced more false-positive examples into the PU learning and made the empirical risk estimation bias more to the expectation when using the extended dictionary. Influence of πp Value. Table 6 lists the performance of AdaPU when using the true or estimated value of πp as listed in table 2. From the table, we can see that the proposed model using the estimated πp only slightly underperforms that using the true value of πp. This shows the robustness of the proposed model to a small variation of πp and verifies the effectiveness of the πp estimation method. 5 Related Work Positive-unlabeled (PU) learning (Li and Liu, 2005) aims to train a classifier using only labeled positive examples and a set of unlabeled data, which contains both positive and negative examples. Recently, PU learning has been used in many applications, e.g., text classification (Li and Liu, 2003), matrix completion (Hsieh et al., 2015), and sequential data (Nguyen et al., 2011). The main difference between PU learning and semisupervised learning is that, in semi-supervised learning, there is labeled data from all classes, while in PU learning, labeled data only contains examples of a single class . 2417 AdaSampling (Yang et al., 2017) is a selftraining-based approach designed for PU learning, which utilizes predictions of the model to iteratively update training data. Generally speaking, it initially treats all unlabeled instances as negative examples. Then, based on the model trained in the last iteration, it generates the probability p(y = 0|xu i ) of an unlabeled example xu i to be a negative one. This value, in turn, determines the probability of xu i to be selected as the negative examples for model training in next iteration. This process iterates for an acceptable result. 6 Conclusion In this work, we introduce a novel PU learning algorithm to perform the NER task using only unlabeled data and named entity dictionaries. We prove that this algorithm can unbiasedly and consistently estimate the task loss as if there is fully labeled data. And we argue that it can greatly reduce the requirement on sizes of the dictionaries. Extensive experimental studies on four NER datasets validate its effectiveness. Acknowledgements The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by China National Key R&D Program (No. 2018YFB1005104, 2018YFC0831105, 2017YFB1002104, ), National Natural Science Foundation of China (No. 61751201, 61532011), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), STCSM (No.16JC1420401,17JC1420200), ZJLab. References Nachman Aronszajn. 1950. Theory of reproducing kernels. Transactions of the American mathematical society, 68(3):337–404. Bogdan Babych and Anthony Hartley. 2003. Improving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT workshop on MT and other Language Technology Tools, pages 1–8. Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Nancy Chinchor. 1998. Overview of muc-7. In Seventh Message Understanding Conference (MUC7): Proceedings of a Conference Held in Fairfax, Virginia, April 29-May 1, 1998. Jason Chiu and Eric Nichols. 2016. 
Named entity recognition with bidirectional lstm-cnns. Transactions of the Association of Computational Linguistics, 4(1):357–370. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Felipe Cucker and Steve Smale. 2002. On the mathematical foundations of learning. Bulletin of the American mathematical society, 39(1):1–49. Louise Del˙eger, Robert Bossy, Estelle Chaix, Mouhamadou Ba, Arnaud Ferr˙e, Philippe Bessieres, and Claire N˙edellec. 2016. Overview of the bacteria biotope task at bionlp shared task 2016. In Proceedings of the 4th BioNLP Shared Task Workshop, pages 12–22. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 363–370. Association for Computational Linguistics. Martin Gerner, Goran Nenadic, and Casey M Bergman. 2010. Linnaeus: a species name identification system for biomedical literature. BMC bioinformatics, 11(1):85. Mourad Gridach. 2017. Character-level neural network for biomedical named entity recognition. Journal of biomedical informatics, 70:85–91. Cho-Jui Hsieh, Nagarajan Natarajan, and Inderjit S Dhillon. 2015. Pu learning for matrix completion. In ICML, pages 2445–2453. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Yoon Kim. 2014. Convolutional neural networks for sentence classification. Empirical Methods in Natural Language Processing. Ryuichi Kiryo, Gang Niu, Marthinus C du Plessis, and Masashi Sugiyama. 2017. Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems, pages 1675–1685. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. 2418 Xiao-Li Li and Bing Liu. 2005. Learning from positive and unlabeled examples with different data distributions. In European Conference on Machine Learning, pages 218–229. Springer. Xiaoli Li and Bing Liu. 2003. Learning to classify texts using positive and unlabeled data. In IJCAI, volume 3, pages 587–592. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. AAAI Conference on Artificial Intelligence. Shengyu Liu, Buzhou Tang, Qingcai Chen, and Xiaolong Wang. 2015. Effects of semantic features on machine learning-based drug name recognition systems: word embeddings vs. manually constructed dictionaries. Information, 6(4):848– 865. Yuan Liu, Qiang Tan, and Kun Xu Shen. 1994. The word segmentation rules and automatic word segmentation methods for chinese information processing. Qing Hua University Press and Guang Xi, page 36. Andrew McCallum, Dayne Freitag, and Fernando CN Pereira. 2000. Maximum entropy markov models for information extraction and segmentation. In Icml, volume 17, pages 591–598. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 188–191. Association for Computational Linguistics. 
David Nadeau, Peter D Turney, and Stan Matwin. 2006. Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity. In Conference of the Canadian Society for Computational Studies of Intelligence, pages 266–277. Springer. Minh Nhut Nguyen, Xiao-Li Li, and See-Kiong Ng. 2011. Positive unlabeled learning for time series classification. In Twenty-Second International Joint Conference on Artificial Intelligence. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Marthinus C du Plessis, Gang Niu, and Masashi Sugiyama. 2014. Analysis of learning from positive and unlabeled data. In Advances in neural information processing systems, pages 703–711. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, and Alessandro Verri. 2004. Are loss functions all the same? Neural Computation, 16(5):1063–1076. Tjong Kim Sang and F Erik. 2002. Introduction to the conll-2002 shared task: language-independent named entity recognition. Computer Science, pages 142–147. Burr Settles. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proceedings of the international joint workshop on natural language processing in biomedicine and its applications, pages 104–107. Association for Computational Linguistics. Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. 2012. Linden: linking named entities with knowledge base via semantic knowledge. In Proceedings of the 21st international conference on World Wide Web, pages 449–458. ACM. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147. Association for Computational Linguistics. Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, 8(1):29–48. Pengyi Yang, Wei Liu, and Jean Yang. 2017. Positive unlabeled learning via wrapper-based adaptive sampling. In IJCAI. Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly supervised ner with partial annotation learning and reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2159–2169. Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive co-attention network for named entity recognition in tweets. In AAAI. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), 1554-1564. GuoDong Zhou and Jian Su. 2002. Named entity recognition using an hmm-based chunk tagger. In proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 473–480. Association for Computational Linguistics. 2419 Jianhan Zhu, Victoria Uren, and Enrico Motta. 2005. 
Espotter: Adaptive named entity recognition for web browsing. In Biennial Conference on Professional Knowledge Management/Wissensmanagement, pages 518–529. Springer.

A Proof of Theorem 1

Proof. Let $\hat{R}^s_\ell(f)$ denote the empirical estimate of $R_\ell(f)$ with $k$ randomly labeled examples. Since $\ell$ is bounded, $C_0$, $M$, and $B$ are finite. According to the Lemma in (Rosasco et al., 2004), we have:
$$P\Big\{S \in \mathcal{D} \,\Big|\, \sup_{f \in \mathcal{H}_R} |R_\ell(f) - \hat{R}^s_\ell(f)| \le \epsilon \Big\} \ge 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{k\epsilon^2}{8B^2}}. \quad (14)$$

Then, the empirical estimation error $R_\ell(f) - \hat{R}_\ell(f)$ in PU learning can be written as:
$$R_\ell(f) - \hat{R}_\ell(f) = \Big( \mathbb{E}_{X}\,\ell(f(x), 0) - \tfrac{1}{n_u}\sum_{i=1}^{n_u} \ell(f(x^u_i), 0) \Big) + \pi_p \Big( \mathbb{E}_{X|Y=1}\,\ell(f(x), 1) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 1) \Big) - \pi_p \Big( \mathbb{E}_{X|Y=1}\,\ell(f(x), 0) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 0) \Big). \quad (15)$$

Thus,
$$|R_\ell(f) - \hat{R}_\ell(f)| \le \Big| \mathbb{E}_{X}\,\ell(f(x), 0) - \tfrac{1}{n_u}\sum_{i=1}^{n_u} \ell(f(x^u_i), 0) \Big| + \pi_p \Big| \mathbb{E}_{X|Y=1}\,\ell(f(x), 1) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 1) \Big| + \pi_p \Big| \mathbb{E}_{X|Y=1}\,\ell(f(x), 0) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 0) \Big|. \quad (16)$$

Let $I_\ell(X, 0)$ denote $\mathbb{E}_{X}\,\ell(f(x), 0) - \tfrac{1}{n_u}\sum_{i=1}^{n_u} \ell(f(x^u_i), 0)$. According to Eq. 14, we have:
$$P\Big\{S \in \mathcal{D} \,\Big|\, \sup_{f \in \mathcal{H}_R} |I_\ell(X, 0)| \le \epsilon \Big\} \ge 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{n_u\epsilon^2}{8B^2}}. \quad (17)$$

Similarly, let $I_\ell(X|Y=1, 1)$ denote $\mathbb{E}_{X|Y=1}\,\ell(f(x), 1) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 1)$, and let $I_\ell(X|Y=1, 0)$ denote $\mathbb{E}_{X|Y=1}\,\ell(f(x), 0) - \tfrac{1}{n_p}\sum_{i=1}^{n_p} \ell(f(x^p_i), 0)$. We then have:
$$P\Big\{S \in \mathcal{D} \,\Big|\, \sup_{f \in \mathcal{H}_R} |I_\ell(X|Y=1, 1)| \le \epsilon \Big\} \ge 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{n_p\epsilon^2}{8B^2}}, \quad (18)$$
and
$$P\Big\{S \in \mathcal{D} \,\Big|\, \sup_{f \in \mathcal{H}_R} |I_\ell(X|Y=1, 0)| \le \epsilon \Big\} \ge 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{n_p\epsilon^2}{8B^2}}. \quad (19)$$

Therefore,
$$P\Big\{S \in \mathcal{D} \,\Big|\, \sup_{f \in \mathcal{H}_R} |R_\ell(f) - \hat{R}_\ell(f)| \le (1 + 2\pi_p)\epsilon \Big\} \ge \min\Big(1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{n_p\epsilon^2}{8B^2}},\; 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{n_u\epsilon^2}{8B^2}}\Big) = 1 - 2\mathcal{N}\Big(\tfrac{\epsilon}{4LM}\Big)\, e^{-\frac{\min(n_p, n_u)\epsilon^2}{8B^2}}. \quad (20)$$

The theorem follows by replacing $\epsilon$ with $\tfrac{1}{1+2\pi_p}\epsilon$.
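To connect the decomposition in Eq. 15 to an implementation, the following is a minimal NumPy sketch of the empirical PU risk estimate and of its clipped, non-negative variant (in the spirit of the bnPU estimator discussed in §4.5). The bounded surrogate loss and all names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bounded_loss(scores, label):
    """Illustrative bounded surrogate loss l(f(x), y) in [0, 1].

    The paper only requires l to be upper bounded; using the absolute
    error between the predicted positive probability and the 0/1 label
    is an assumption made for this sketch.
    """
    return np.abs(sigmoid(scores) - label)

def empirical_pu_risk(scores_p, scores_u, pi_p, non_negative=True):
    """Empirical PU risk following the decomposition implicit in Eq. 15:

        R_hat(f) = (1/n_u) * sum l(f(x_u), 0)
                 + pi_p * (1/n_p) * sum l(f(x_p), 1)
                 - pi_p * (1/n_p) * sum l(f(x_p), 0)

    With non_negative=True, the negative-class part is clipped at zero,
    in the spirit of the non-negative estimator of Kiryo et al. (2017).
    """
    risk_p_pos = np.mean(bounded_loss(scores_p, 1.0))   # (1/n_p) sum l(f(x_p), 1)
    risk_p_neg = np.mean(bounded_loss(scores_p, 0.0))   # (1/n_p) sum l(f(x_p), 0)
    risk_u_neg = np.mean(bounded_loss(scores_u, 0.0))   # (1/n_u) sum l(f(x_u), 0)

    negative_part = risk_u_neg - pi_p * risk_p_neg
    if non_negative:
        negative_part = max(0.0, negative_part)
    return pi_p * risk_p_pos + negative_part

# Toy usage with random scores for dictionary-labeled positives and unlabeled tokens.
rng = np.random.default_rng(0)
scores_p = rng.normal(1.0, 1.0, size=200)
scores_u = rng.normal(-0.5, 1.0, size=2000)
print(empirical_pu_risk(scores_p, scores_u, pi_p=0.1))
```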
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2420–2430 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2420 Multi-Task Semantic Dependency Parsing with Policy Gradient for Learning Easy-First Strategies Shuhei Kurita Center for Advanced Intelligence Project RIKEN Tokyo, Japan [email protected] Anders Søgaard Department of Computer Science University of Copenhagen Copenhagen, Denmark [email protected] Abstract In Semantic Dependency Parsing (SDP), semantic relations form directed acyclic graphs, rather than trees. We propose a new iterative predicate selection (IPS) algorithm for SDP. Our IPS algorithm combines the graph-based and transition-based parsing approaches in order to handle multiple semantic head words. We train the IPS model using a combination of multi-task learning and task-specific policy gradient training. Trained this way, IPS achieves a new state of the art on the SemEval 2015 Task 18 datasets. Furthermore, we observe that policy gradient training learns an easy-first strategy. 1 Introduction Dependency parsers assign syntactic structures to sentences in the form of trees. Semantic dependency parsing (SDP), first introduced in the SemEval 2014 shared task (Oepen et al., 2014), in contrast, is the task of assigning semantic structures in the form of directed acyclic graphs to sentences. SDP graphs consist of binary semantic relations, connecting semantic predicates and their arguments. A notable feature of SDP is that words can be the semantic arguments of multiple predicates. For example, in the English sentence: “The man went back and spoke to the desk clerk” – the word “man” is the subject of the two predicates “went back” and “spoke”. SDP formalisms typically express this by two directed arcs, from the two predicates to the argument. This yields a directed acyclic graph that expresses various relations among words. However, the fact that SDP structures are directed acyclic graphs means that we cannot apply standard dependency parsing algorithms to SDP. Standard dependency parsing algorithms are often said to come in two flavors: transition-based The man went back and spoke to the desk clerk. a) DM The man went back and spoke to the desk clerk. b) PAS The man went back and spoke to the desk clerk. c) PSD ROOT ROOT ROOT ROOT BV ARG1 ARG1 AND_C LOC ARG1 ARG2 BV COMPOUND DET_ARG1 VERB_ARG1 VERB_ARG1 ADJ_ARG1 COORD_ARG1 COORD_ARG2 PREP_ARG1 PREP_ARG2 DET_ARG1 NOUN_ARG1 ACT DIR3 CONJ.MEMBER RSTR ADDR CONJ.MEMBER Figure 1: Semantic dependency parsing arcs of DM, PAS and PSD formalisms. parsers score transitions between states, and gradually build up dependency graphs on the side. Graph-based parsers, in contrast, score all candidate edges directly and apply tree decoding algorithms for the resulting score table. The two types of parsing algorithms have different advantages (McDonald and Nivre, 2007), with transitionbased parsers often having more problems with error propagation and, as a result, with long-distance dependencies. This paper presents a compromise between transition-based and graph-based parsing, called iterative predicate selection (IPS) – inspired by head selection algorithms for dependency parsing (Zhang et al., 2017) – and show that error propagation, for this algorithm, can be reduced by a combination of multi-task and reinforcement learning. Multi-task learning is motivated by the fact that there are several linguistic formalisms for SDP. Fig. 
1 shows the three formalisms used in the shared task. The DELPH-IN MRS (DM) formalism derives from DeepBank (Flickinger et al., 2012) and minimal recursion semantics (Copestake et al., 2005). Predicate-Argument Structure (PAS) is a formalism based on the Enju HPSG parser (Miyao et al., 2004) and is generally considered slightly more syntactic of nature than the 2421 other formalisms. Prague Semantic Dependencies (PSD) are extracted from the Czech-English Dependency Treebank (Hajiˇc et al., 2012). There are several overlaps between these linguistic formalisms, and we show below that parsers, using multi-task learning strategies, can take advantage of these overlaps or synergies during training. Specifically, we follow Peng et al. (2017) in using multi-task learning to learn representations of parser states that generalize better, but we go beyond their work, using a new parsing algorithm and showing that we can subsequently use reinforcement learning to prevent error propagation and tailor these representations to specific linguistic formalisms. Contributions In this paper, (i) we propose a new parsing algorithm for semantic dependency parsing (SDP) that combines transition-based and graph-based approaches; (ii) we show that multitask learning of state representations for this parsing algorithm is superior to single-task training; (iii) we improve this model by task-specific policy gradient fine-tuning; (iv) we achieve a new state of the art result across three linguistic formalisms; finally, (v) we show that policy gradient fine-tuning learns an easy-first strategy, which reduces error propagation. 2 Related Work There are generally two kinds of dependency parsing algorithms, namely transition-based parsing algorithms (McDonald and Nivre, 2007; Kiperwasser and Goldberg, 2016; Ballesteros et al., 2015) and graph-based ones (McDonald and Pereira, 2006; Zhang and Clark, 2008; Galley and Manning, 2009; Zhang et al., 2017). In graphbased parsing, a model is trained to score all possible dependency arcs between words, and decoding algorithms are subsequently applied to find the most likely dependency graph. The Eisner algorithm (Eisner, 1996) and the Chu-Liu-Edmonds algorithm are often used for finding the most likely dependency trees, whereas the AD3 algorithm (Martins et al., 2011) is used for finding SDP graphs that form DAGs in Peng et al. (2017) and Peng et al. (2018). During training, the loss is computed after decoding, leading the models to reflect a structured loss. The advantage of graphbased algorithms is that there is no real error propagation to the extent the decoding algorithms are global inference algorithm, but this also means that reinforcement learning is not obviously applicable to graph-based parsing. In transition-based parsing, the model is typically taught to follow a gold transition path to obtain a perfect dependency graph during training. This training paradigm has the limitation that the model only ever gets to see states that are on gold transition paths, and error propagation is therefore likely to happen when the parser predicts wrong transitions leading to unseen states (McDonald and Nivre, 2007; Goldberg and Nivre, 2013). There have been several attempts to train transition-based parsers with reinforcement learning: Zhang and Chan (2009) applied SARSA (Baird III, 1999) to an Arc-Standard model, using SARSA updates to fine-tune a model that was pre-trained using a feed-forward neural network. 
Fried and Klein (2018), more recently, presented experiments with applying policy gradient training to several constituency parsers, including the RNNG transition-based parser (Dyer et al., 2016). In their experiments, however, the models trained with policy gradient did not always perform better than the models trained with supervised learning. We hypothesize this is due to credit assignment being difficult in transition-based parsing. Iterative refinement approaches have been proposed in the context of sentence generation (Lee et al., 2018). Our proposed model explores multiple transition paths at once and avoids making risky decisions in the initial transitions, in part inspired by such iterative refinement techniques. We also pre-train our model with supervised learning to avoid sampling from irrelevant states at the early stages of policy gradient training. Several models have been presented for DAG parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokg¨oz and G¨ulsen, 2015; Hershcovich et al., 2017). Wang et al. (2018) proposed a similar transition-based parsing model for SDP; they modified the possible transitions of the ArcEager algorithm (Nivre and Scholz, 2004b) to create multi-headed graphs. We are, to the best of our knowledge, first to explore reinforcement learning for DAG parsing. 3 Model 3.1 Iterative Predicate Selection We propose a new semantic dependency parsing algorithm based on the head-selection algorithm for syntactic dependency parsing (Zhang et al., 2422 The man went back and spoke to the desk clerk. The man went back and spoke to the desk clerk. The man went back and spoke to the desk clerk. The man went back and spoke to the desk clerk. The man went back and spoke to the desk clerk. The man went back and spoke to the desk clerk. transitions Initial state Final state t0 w1, · · · , t0 wn t1 w1, · · · , t1 wn t2 w1, · · ·, t2 wn transitions transitions τ = 0 τ = 1 τ = 2 τ = 3 Figure 2: Construction of semantic dependency arcs (DM) in the IPS parsing algorithm. Parsing begins from the initial state and proceeds to the final state following one of several paths. In the left path, the model resolves adjacent arcs first. In contrast, in the right path, distant arcs that rely on the global structure are resolved first. 2017). Head selection iterates over sentences, fixing the head of a word w in each iteration, ignoring w in future iterations. This is possible for dependency parsing because each word has a unique head word, including the root of the sentence, which is attached to an artificial root symbol. However, in SDP, words may attach to multiple head-words or semantic predicates whereas other words may not attach to any semantic predicates. Thus, we propose an iterative predicate selection (IPS) parsing algorithm, as a generalization of head-selection in SDP. The proposed algorithm is formalized as follows. First, we define transition operations for all words in a sentence. For the i-th word wi in a sentence, the model selects one transition tτ i from the set of possible transitions T τ i for each transition time step τ. Generally, the possible transitions Ti for the i-th word are expressed as follows: {NULL, ARCi,ROOT, ARCi,1, · · · , ARCi,n} where ARCi,j is a transition to create an arc from the j-th word to the i-th word, encoding that the semantic predicate wj takes wi as an semantic argument. NULL is a special transition that does not create an arc. 
The set of possible transitions T τ i for the i-th word at time step τ is a subset of possible transitions Ti that satisfy two constraints: (i) no arcs can be reflexive, i.e., wi cannot be an argument of itself, and (ii) the new arc must not be a member of the set of arcs Aτ comprising the partial parse graph yτ constructed at time step τ. Therefore, we obtain: T τ i = Ti/(ARCi,i ∪Aτ). The model then creates semantic dependency arcs by iterating over the sentence as follows:1 1This algorithm can introduce circles. However, circles 1 For each word wi, select a head arc from T τ i . 2 Update the partial semantic dependency graph. 3 If all words select NULL, the parser halts. Otherwise, go to 1. Fig. 2 shows the transitions of the IPS algorithm during the DM parsing of the sentence “The man went back and spoke to the desk clerk.” In this case, there are several paths from the initial state to the final parsing state, depending on the orders of creating the arcs. This is known as the nondeterministic oracle problem (Goldberg and Nivre, 2013). In IPS parsing, some arcs are easy to predict; others are very hard to predict. Long-distance arcs are generally difficult to predict, but they are very important for down-stream applications, including reordering for machine translation (Xu et al., 2009). Since long-distance arcs are harder to predict, and transition-based parsers are prone to error propagation, several easy-first strategies have been introduced, both in supervised (Goldberg and Elhadad, 2010; Ma et al., 2013) and unsupervised dependency parsing (Spitkovsky et al., 2011), to prefer some paths over others in the face of the non-deterministic oracle problem. Easy-first principles have also proven effective with sequence taggers (Tsuruoka and Tsujii, 2005; Martins and Kreutzer, 2017). In this paper, we take an arguably more principled approach, learning a strategy for choosing transition paths over others using reinforcement learning. We observe, however, that the learned strategies exhibit a clear easy-first preference. were extremely rare in our experiments, and can be avoided by simple heuristics during decoding. We discuss this issue in the Supplementary Material, §A.1. 2423 LSTM LSTM LSTM LSTM MLP MLP MLP The U.S. contends that the rules ... The U.S. contends that ROOT The U.S. contends that the ... pi(tj) LSTM LSTM LSTM hi hj gij fij hi hj+3 NULL ROOT MLP . . . NULL U.S. The U.S. contends softmax . . . U.S. that U.S. ROOT Transition probability a) Encoder and MLP b) Encoder of Semantic Dependency ROOT The U.S. contends that the rules ... LSTM LSTM LSTM ... The U.S. contends that (the) wi i wNONE wNONE wROOT wNONE wNONE (contentds) w 2 wNONE wNONE g ij + g i,j+2 gi+1,j−1 LSTM LSTM LSTM LSTM LSTM LSTM . . . for i-th word ... sij sij+3 si0 . . . sij+2 . . . . . . wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE wNONE Figure 3: Our network architecture: (a) The encoder of the sentence into the hidden representations hi and hj, and the MLP for the transition probabilities. (b) The encoder of the semantic dependency matrix for the representation of hd ij. The MLP also takes the arc flag representation fij (see text for explanation). 3.2 Neural Model Fig. 3 shows the overall neural network. It consists of an encoder for input sentences and partial SDP graphs, as well as a multi-layered perceptron (MLP) for the semantic head-selection of each word. 
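Before turning to the individual components, the following is a minimal Python sketch of the IPS transition loop from §3.1: each word selects a transition from $T_i^\tau$, the partial SDP graph is updated, and parsing halts once every word selects NULL. The scoring function is a stand-in for the neural scorer described below, and all names are illustrative rather than taken from our implementation.

```python
def ips_parse(n_words, score_fn, max_steps=10):
    """Iterative predicate selection over a sentence of n_words tokens.

    score_fn(i, j, arcs) -> float scores the transition ARC_{i,j} that makes
    token j (0 = ROOT, 1..n_words = words) a semantic predicate of token i;
    NULL is represented by j = None. Arcs are stored as (head j, argument i).
    """
    arcs = set()  # partial SDP graph A^tau
    for _ in range(max_steps):
        # Step 1: every word selects one transition from T_i^tau.
        selected = {}
        for i in range(1, n_words + 1):
            candidates = [None] + [j for j in range(0, n_words + 1)
                                   if j != i and (j, i) not in arcs]
            selected[i] = max(candidates, key=lambda j: score_fn(i, j, arcs))
        new_arcs = {(j, i) for i, j in selected.items() if j is not None}
        # Step 3: if all words selected NULL, the parser halts.
        if not new_arcs:
            break
        # Step 2: update the partial semantic dependency graph.
        arcs |= new_arcs
    return arcs

# Toy usage: a heuristic scorer that prefers NULL unless tokens are adjacent.
def toy_score(i, j, arcs):
    if j is None:
        return 0.0
    return 1.0 if abs(i - j) == 1 else -1.0

print(ips_parse(n_words=5, score_fn=toy_score))
```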
Sentence encoder We employ bidirectional long short-term memory (BiLSTM) layers for encoding words in sentences. A BiLSTM consists of two LSTMs that reads the sentence forward and backward, and concatenates their output before passing it on. For a sequence of tokens [w1, · · · , wn], the inputs for the encoder are words, POS tags and lemmas.2 They are mapped to the same p-dimensional embedding vectors in a look-up table. Then they are concatenated to form 3p-dimensional vectors and used as the input of BiLSTMs. We denote the mapping function of tokens into 3p-dimensional vectors as u(w∗) for later usages. Finally, we obtain the hidden representations of all words [h(w1), · · · , h(wn)] from the three-layer BiLSTMs. We use three-layer stacked BiLSTMs. We also use special embeddings hNULL for the NULL transition and hROOT for the ROOT of the sentence. Encoder of partial SDP graphs The model updates the partial SDP graph at each time step of the parsing procedure. The SDP graph yτ at time step τ is stored in a semantic dependency matrix Gτ ∈{0, 1}n×(n+1) for a sentence of n words.3 The rows of the matrix G represent arguments and 2In the analysis of our experiments, we include an ablation test, where we leave out lemma information for a more direct comparison with one of our baselines. 3In this subsection, we omit the time step subscription τ of the partial SDP graph from some equations for simplicity. the columns represent head-candidates, including the ROOT of the sentence, which is represented by the first column of the matrix. For each transition for a word, the model fills in one cell in a row, if the transition is not NULL. In the initial state, all cells in G are 0. A cell G[i, j] is updated to 1, when the model predicts that the (i −1)-th word is an argument of the j-th word or ROOT when j = 0. We convert the semantic dependency matrix G into a rank three tensor G′ ∈Rn×(n+1)×p, by replacing elements with embeddings of tokens u(w∗) by g′ ij = ( u(wj−1) (gij = 1) u(wNONE) (gij = 0) (1) where gij ∈G and g′ ij ∈G′. g′ i∗contains the representations of the semantic predicates for the i-th word in the partial SDP graph. We use a single layer Bi-LSTM to encode the semantic predicates g′ i∗of each word; see Fig. 3 (b). Finally, we concatenate the hidden representation of the NULL transition and obtain the partial SDP graph representation Gτ of the time step τ: Gτ = [gτ NULL, gτ ∗,1, · · · , gτ ∗,n+1] (2) We also employ dependency flags that directly encode the semantic dependency matrix and indicate whether the corresponding arcs are already created or not. Flag representations F ′ are also three-rank tensors, consisting of two hidden representations: fARC for gi,j = 1 and fNOARC for gi,j = 0 depending on G. fARC and fNOARC is q-dimensional vectors. Then we concatenate the hidden representation of the NULL transition and 2424 obtain the flag representation F τ: F τ = [fτ NULL, fτ ∗,1, · · · , fτ ∗,n+1] (3) . We do not use BiLSTMs to encode these flags. These flags also reflect the current state of the semantic dependency matrix. Predicate selection model The semantic predicate selection model comprises an MLP with inputs from the encoder of the sentence and the partial semantic dependency graph: the sentence representation H, the SDP representation Gτ, and the dependency flag F τ. They are rank three tensors and concatenated at the third axis. Formally, the score sij of the i-th word and the j-th transition is expressed as follows. 
sτ ij = MLP([hi, hj, gτ ij, fτ ij]) (4) For the MLP, we use a concatenation of outputs from three different networks: a three-layer MLP, a two-layer MLP and a matrix multiplication with bias terms as follows. MLP(x) = W 3 3 a W 3 2 a(W 3 1 x + b3 1) + b3 2  +W 2 2 a(W 2 1 x + b2 2) + W 1 1 x + b1 1 W ∗ ∗′ are matrices or vectors used in this MLP and W ∗ ∗′ are bias terms. Here, we use this MLP for predicting a scalar score sij; therefore, W 3 3 , W 2 2 , W 1 1 are vectors. The model computes the probability of the transition tj for each word i by applying a softmax function over the candidates of the semantic head words wj. pi(tτ j ) = softmaxj(sτ ij) (5) These transition probabilities pi(tj) of selecting a semantic head word wj, are defined for each word wi in a sentence. For supervised learning, we employ a cross entropy loss Lτ(θ) = − X i,j li log pi(tτ j |Gτ) (6) for the partial SDP graph Gτ at time step τ. Here li is a gold transition label for the i-th word and θ represents all trainable parameters. Note that this supervised training regime, as mentioned above, does not have a principled answer to the non-deterministic oracle problem (Goldberg and Nivre, 2013), and samples transition paths randomly from those consistent with the gold anntoations to create transition labels. Algorithm 1 Policy gradient learning for IPS Algorithm Input: Sentence x with an empty parsing tree y0. Let a time step τ = 0 and finish flags f∗= 0. for 0 ≤τ < the number of maximum iterations do Compute πτ and argmax transitions ˆti = arg max πτ i . if ∀i ; ˆtτ i = NULL then break end if for i-th word in a sentence do if check a finish flag fi = 1 then continue end if if all arcs to word i are correctly created in yτ and ˆti = NULL then Let a flag f = 1 continue end if Sample tτ i from πτ i . Update the parsing tree yτ to yτ+1. Compute a new reward rτ i from yτ, yτ+1 and yg. end for Store a tuple of the state, transitions and rewards for words {yτ, tτ ∗, rτ ∗}. end for Shuffle tuples of {yτ, tτ ∗, rτ ∗} for a time step τ. for a tuple {yτ′, tτ ∗, rτ′ ∗} of time step τ ′ do Compute gradient and update parameters. end for Labeling model We also develop a semantic dependency labeling neural network. This neural network consists of three-layer stacked BiLSTMs and a MLP for predicting a semantic dependency label between words and their predicates. We use a MLP that is a sum of the outputs from a threelayer MLP, a two-layer MLP and a matrix multiplication. Note that the output dimension of this MLP is the number of semantic dependency labels. The input of this MLP is the hidden representations of a word i and its predicates j: [hi, hj] extracted from the stacked BiLSTMs. The score s′ ij(l) of the label l for the arc from predicate j to word i is predicted as follows. s′ ij(l) = MLP′([hi, hj]) (7) We minimize the softmax cross entropy loss using supervised learning. 3.3 Reinforcement Learning Policy gradient Reinforcement learning is a method for learning to iteratively act according to a dynamic environment in order to optimize future rewards. In our context, the agent corresponds to the neural network model predicting the transition probabilities pi(tτ j ) that are used in the parsing algorithm. The environment includes the partial SDP graph yτ, and the rewards rτ are computed 2425 by comparing the predicted parse graph to the gold parse graph yg. We adapt a variation of the policy gradient method (Williams, 1992) for IPS parsing. 
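To make Eqs. 5–6 concrete, the following is a small NumPy sketch of the per-word transition distribution (a softmax over head candidates) and the supervised cross-entropy loss. The score matrix is assumed to come from the MLP of Eq. 4, and the toy gold transitions are purely illustrative.

```python
import numpy as np

def transition_log_probs(scores):
    """Row-wise log-softmax over head candidates (Eq. 5).

    scores: array of shape (n_words, n_candidates), standing in for s_ij
    from Eq. 4; column 0 can be read as the NULL transition and the
    remaining columns as ROOT and head-word candidates.
    """
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def supervised_loss(scores, gold_transitions):
    """Cross-entropy over the sampled gold transitions (Eq. 6)."""
    log_probs = transition_log_probs(scores)
    rows = np.arange(scores.shape[0])
    return -log_probs[rows, gold_transitions].sum()

# Toy usage: 4 words, 6 candidate transitions each.
rng = np.random.default_rng(1)
scores = rng.normal(size=(4, 6))
gold = np.array([0, 2, 2, 5])  # e.g., NULL for the first word, head indices otherwise
print(supervised_loss(scores, gold))
```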
Our objective function is to maximize the rewards J(θ) = Eπ [rτ i ] (8) and the transition policy for the i-th word is given by the probability of the transitions π ∼pi(tτ j |yτ). The gradient of Eq.8 is given as follows: ∇J(θ) = Eπ  rτ i ∇log pi(tτ j |yτ)  (9) When we compute this gradient, given a policy π, we approximate the expectation Eπ for any transition sequence with a single transition path t that is sampled from policy π: ∇J(θ) ≈ X tτ j ∈t [rτ i ∇log pi(tτ j|yτ)] (10) We summarize our policy gradient learning algorithm for SDP in Algorithm 1. For time step τ, the model samples one transition tτ j selecting the j-th word as a semantic head word of the ith word, from the set of possible transitions Ti, following the transition probability of π. After sampling tτ j , the model updates the SDP graph to yτ+1 and computes the reward rτ i . When NULL becomes the most likely transition for all words, or the time step exceeds the maximum number of time steps allowed, we stop.4 For each time step, we then update the parameters of our model with the gradients computed from the sampled transitions and their rewards.5 Note how the cross entropy loss and the policy gradient loss are similar, if we do not sample from the policy π, and rewards are non-negative. However, these are the important differences between supervised learning and reinforcement learning: (1) Reinforcement learning uses sampling of transitions. This allows our model to explore transition paths that supervised models would never follow. (2) In supervised learning, decisions are independent of the current time step τ, while in reinforcement learning, decisions depend on τ. This means that the θ parameters are updated after the parser finishes parsing the input sentence. (3) Loss 4We limit the number of transitions during training, but not at test time. 5We update the parameters for each time step to reduce memory requirements. Reward Transitions rτ i = 1 (1) The model creates a new correct arc from a semantic predicate to the i-th word. (2) The first time the model chooses the NULL transition after all gold arcs to the i-th word have been created, and no wrong arcs to the i words have not been created. rτ i = −1 (3) The model creates a wrong arc from a semantic predicate candidate to the i-th word. rτ i = 0 (4) All other transitions. Table 1: Rewards in SDP policy gradient. must be non-negative in supervised learning, while rewards can be negative in reinforcement learning. In general, the cross entropy loss is able to optimize for choosing good transitions given a parser configuration, while the policy gradient objective function is able to optimize the entire sequence of transitions drawn according to the current policy. We demonstrate the usefulness of reinforcement learning in our experiments below. Rewards for SDP We also introduce intermediate rewards, given during parsing, at different time steps. The reward rτ i of the i-th word is determined as shown in Table 1. The model gets a positive reward for creating a new correct arc to the i-th word, or if the model for the first time chooses a NULL transition after all arcs to the i-th word are correctly created. The model gets a negative reward when the model creates wrong arcs. When our model chooses NULL transitions for the i-th word before all gold arcs are created, the reward rτ i becomes 0. 3.4 Implementation Details This section includes details of our implementation.6 We use 100-dimensional, pre-trained Glove (Pennington et al., 2014) word vectors. 
Words or lemmas in the training corpora that do not appear in pre-trained embeddings are associated with randomly initialized vector representations. Embeddings of POS tags and other special symbol are also randomly initialized. We apply Adam as our optimizer. Preliminary experiments show that mini-batching led to a degradation in performance. When we apply policy gradient, we pre-train our model using supervised learning. We then use policy gradient for task-specific fine-tuning of our model. We find that updating parameters of BiLSTM and word embeddings during policy gradient 6The code is available at https://github.com/ shuheikurita/semrl 2426 Name Value Encoder BiLSTM hidden layer size 600 Dependency LSTM hidden layer size 200 The dimensions of embeddings p,q 100, 128 MLPs hidden layer size 4000 Dropout rate in MLPs 0.5 Max transitions during reinforcement learning 10 Table 2: Hyper-parameters in our experiments. Model DM PAS PSD Avg. Peng+ 17 Freda3 90.4 92.7 78.5 88.0 Wang+ 18 Ens. 90.3 91.7 78.6 86.9 Peng+ 18 91.6 78.9 IPS 91.1 92.4 78.6 88.2 IPS +ML 91.2 92.5 78.8 88.3 IPS +RL 91.6‡ 92.8‡ 79.2‡ 88.7‡ IPS +ML +RL 92.0‡ 92.8‡ 79.3‡ 88.8‡ Table 3: Labeled parsing performance on in-domain test data. Avg. is the micro-averaged score of three formalisms. ‡ of the +RL models represents that the scores are statistically significant at p < 10−3 with their nonRL counterparts. makes training quite unstable. Therefore we fix the BiLSTM parameters during policy gradient. In our multi-task learning set-up, we apply multi-task learning of the shared stacked BiLSTMs (Søgaard and Goldberg, 2016; Hashimoto et al., 2017) in supervised learning. We use task-specific MLPs for the three different linguistic formalisms: DM, PAS and PSD. We train the shared BiLSTM using multi-task learning beforehand, and then we finetune the task-specific MLPs with policy gradient. We summarize the rest of our hyper-parameters in Table 2. 4 Experiments We use the SemEval 2015 Task18 (Oepen et al., 2015) SDP dataset for evaluating our model. The training corpus contains 33,964 sentences from the WSJ corpus; the development and in-domain test were taken from the same corpus and consist of 1,692 and 1,410 sentences, respectively. The outof-domain test set of 1,849 sentences is drawn from Brown corpus. All sentences are annotated with three semantic formalisms: DM, PAS and PSD. We use the standard splits of the datasets (Almeida and Martins, 2015; Du et al., 2015). Following standard evaluation practice in semantic dependency parsing, all scores are micro-averaged F-measures (Peng et al., 2017; Wang et al., 2018) with labeled attachment scores (LAS). Model DM PAS PSD Avg. Peng+ 17 Freda3 85.3 89.0 76.4 84.4 Peng+ 18 86.7 77.1 IPS +ML 86.0 88.2 77.2 84.6 IPS +ML +RL 87.2‡ 88.8‡ 77.7‡ 85.3‡ Table 4: Labeled parsing performance on out-ofdomain test data. Avg. is the micro-averaged score of three formalisms. ‡ of the +RL models represents that the scores are statistically significant at p < 10−3 with their non-RL counterparts. The system we propose is the IPS parser trained with a multi-task objective and fine-tuned using reinforcement learning. This is referred to as IPS+ML+RL in the results tables. To highlight the contributions of the various components of our architecture, we also report ablation scores for the IPS parser without multi-task training nor reinforcement learning (IPS), with multi-task training (IPS+ML) and with reinforcement learning (IPS+RL). 
At inference time, we apply heuristics to avoid predicting circles during decoding (Camerini et al., 1980); see Supplementary Material, §A.1. This improves scores by 0.1 % or less, since predicted circles are extremely rare. We compare our proposed system with three state-ofthe-art SDP parsers: Freda3 of Peng et al. (2017), the ensemble model in Wang et al. (2018) and Peng et al. (2018). In Peng et al. (2018), they use syntactic dependency trees, while we do not use them in our models.7 The results of our experiments on in-domain dataset are also shown in Table 3. We observe that our basic IPS model achieves competitive scores in DM and PAS parsing. Multi-task learning of the shared BiLSTM (IPS+ML) leads to small improvements across the board, which is consistent with the results of Peng et al. (2017). The model trained with reinforcement learning (IPS+RL) performs better than the model trained by supervised learning (IPS). These differences are significant (p < 10−3). Most importantly, the combination of multi-task learning and policy gradient-based reinforcement learning (IPS+ML+RL) achieves the best results among all IPS models and the previous state of the art models, by some margin. We also obtain similar results for the out-of-domain 7Dozat and Manning (2018) report macro-averaged scores instead, as mentioned in their ACL 2018 talk, and their results are therefore not comparable to ours. For details, see the video of their talk on ACL2018 that is available on Vimeo. 2427 1st 2nd 3rd 4th Transitions Arc Length Dist. a) Supervised b) Reinforcement Dist. Figure 4: Arc length distributions: (a) Supervised learning (IPS+ML). (b) Reinforcement learning (IPS+ML+RL). The four lines correspond to the first to fourth transitions in the derivations. the position of chief financial officer , who will be hired from within the agency. Within weeks the unfolding Iran-Contra scandal took away Mr. Noriega’s insurance policy. Morgan will help evaluate DFC’s position and help determine alternatives. The U.S. Commerce Department reported a $ 10.77 billion deficit in August compared with ... 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 a) b) c) d) 2 1 3 4 SL RL SL RL SL RL SL RL lead the industry with a strong sales performance in the human and animal health-products segment. e) 1 2 4 3 1 5 3 2 5 4 SL RL Figure 5: Examples of clauses parsed with DM formalism. The underlined words are the semantic predicates of the argument words in rectangles in the annotation. The superscript numbers (SL) are the orders of creating arcs by IPS+ML and the subscript numbers (RL) are the orders by IPS+ML+RL. In the clause (a), we show a partial SDP graph to visualize the SDP arcs. Model DM PAS PSD Avg. Peng+ 17 Freda3 90.4 92.5 78.5 88.0 IPS +ML -Lemma 90.7 92.3 78.3 88.0 IPS +ML +RL -Lemma 91.2‡ 92.9‡ 78.8‡ 88.5‡ Table 5: Evaluation of our parser when not using lemma embeddings (for a more direct comparison with Freda3), on in-domain test datasets. ‡ of +RL models represents that the scores are statistically significant at p < 10−3 with their non-RL counterparts. datasets, as shown in Table 4. All improvements with reinforcement learning are also statistically significant (p < 10−3). Evaluating Our Parser without Lemma Since our baseline (Peng et al., 2017) does not rely on neither lemma or any syntactic information, we also make a comparison of IPS+ML and IPS+ML+RL trained with word and POS embeddings, but without lemma embeddings. The results are given in Table 5. 
We see that our model is still better on average and achieves better performance on all three formalisms. We also notice that the lemma information does not improve the performance in the PAS formalism. Effect of Reinforcement Learning Fig. 4 shows the distributions of the length of the created arcs in the first, second, third and fourth transitions for all words, in the various IPS models in the development corpus. These distributions show the length of the arcs the models tend to create in the first and later transitions. Since long arcs are harder to predict, an easy-first strategy would typically amount to creating short arcs first. In supervised learning (IPS+ML), there is a slight tendency to create shorter arcs first, but while the ordering is relatively consistent, the differences are small. This is in sharp contrast with the distributions we see for our policy gradient parser (IPS+ML+RL). Here, across the board, it is very likely that the first transition connects neighboring words; and very unlikely that neighboring words are connected at later stages. This suggests that reinforcement learning learns an easyfirst strategy of predicting short arcs first. Note 2428 that unlike easy-first algorithms in syntactic parsing (Goldberg and Nivre, 2013), we do not hardwire an easy-first strategy into our parser; but rather, we learn it from the data, because it optimizes our long-term rewards. We present further analyses and analyses on WSJ syntactic dependency trees in Appendix A.2. Fig. 5 shows four sentence excerpts from the development corpus, and the order in which arcs are created. We again compare the model trained with supervised learning (IPS+ML notated as SL here) to the model with reinforcement learning (IPS+ML+RL notated as RL here). In examples (a) and (b), the RL model creates arcs inside noun phrases first and then creates arcs to the verb. The SL model, in contrast, creates arcs with inconsistent orders. There are lots of similar examples in the development data. In clause (c), for example, it seems that the RL model follows a grammatical ordering, while the SL model does not. In the clause (d), it seems that the RL model first resolves arcs from modifiers, in “chief financial officer”, then creates an arc from the adjective phrase “, who will be hired”, and finally creates an arc from the external phrase “the position of”. Note that both the SL and RL models make an arc from “of” in stead of the annotated label of the word “position” in the phrase “the position of”. In the clause (e), the RL model resolve the arcs in the noun phrase “a strong sales performance” and then resolve arcs from the following prepositional phrase. Finally, the RL model resolve the arc from the word “with” that is the headword in the syntactic dependency tree. In the example (d) and (e), the RL model elaborately follows the syntactic order that are not given in any stages of training and parsing. 5 Conclusion We propose a novel iterative predicate selection (IPS) parsing model for semantic dependency parsing. We apply multi-task learning to learn general representations of parser configurations, and use reinforcement learning for task-specific fine-tuning. In our experiments, our multi-task reinforcement IPS model achieves a new state of the art for three SDP formalisms. Moreover, we show that fine-tuning with reinforcement learning learns an easy-first strategy and some syntactic features. Acknowledgements This work was done when Shuhei Kurita visited the University of Copenhagen. 
Shuhei Kurita was supported by JST ACT-I Grant Number JPMJPR17U8, Japan and partly supported by JST CREST Grant Number JPMJCR1301, Japan. Anders Søgaard was supported by a Google Focused Research Award. References M. Almeida and A. Martins. 2015. Lisbon: Evaluating turbosemanticparser on multiple languages and outof-domain data. Leemon C. Baird III. 1999. Reinforcement learning through gradient descent. School of Computer Science Carnegie Mellon University. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the EMNLP, pages 349–359. P. M. Camerini, L. Fratta, and F. Maffioli. 1980. The k best spanning arborescences of a network. Networks, 10:91–110. Ann Copestake, Dan Flickinger, Ivan A. Sag, and Carl Pollard. 2005. Minimal recursion semantics: An introduction. In Research on Language & Computation, pages 3(4):281–332. Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the ACL (Short Papers), pages 484–490. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and XiaojunWan. 2015. Peking: Building semantic dependency graphs with a hybrid parser. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the NAACL: HLT, pages 199–209, San Diego, California. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING. Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank: Adynamically annotated treebank of the wall street journal. In In Proc. of TLT. Daniel Fried and Dan Klein. 2018. Policy gradient as a proxy for dynamic oracles in constituency parsing. In Proceedings of the ACL, pages 469–476. Michel Galley and Christopher D. Manning. 2009. Quadratic-time dependency parsing for machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 2429 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 773–781. Association for Computational Linguistics. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: NAACL, pages 742–750, Los Angeles, California. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. pages 403–414. Jan Hajiˇc, Eva Hajiˇcov´a, Jarmila Panevov´a, Petr Sgall, Ondˇrej Bojar, Silvie Cinkov´a, Eva Fuˇc´ıkov´a, Marie Mikulov´a, Petr Pajas, Jan Popelka, Jiˇr´ı Semeck´y, Jana ˇSindlerov´a, Jan ˇStˇep´anek, Josef Toman, Zdeˇnka Ureˇsov´a, and Zdenˇek ˇZabokrtsk´y. 2012. Announcing prague czech-english dependency treebank 2.0. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 3153–3160. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the EMNLP, pages 1923– 1933. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for ucca. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127– 1138. Association for Computational Linguistics. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. 
TACL, 4:313– 327. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173– 1182. Association for Computational Linguistics. Ji Ma, Jingbo Zhu, Tong Xiao, and Nan Yang. 2013. Easy-first POS tagging and dependency parsing with beam search. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 110–114, Sofia, Bulgaria. Association for Computational Linguistics. Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011. Dual decomposition with many overlapping components. In Proceedings of the 2011 Conference on EMNLP, pages 238–249, Edinburgh, Scotland, UK. Andr´e F. T. Martins and Julia Kreutzer. 2017. Learning what’s easy: Fully differentiable neural easy-first taggers. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 349–362, Copenhagen, Denmark. Association for Computational Linguistics. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on EMNLP-CoNLL, pages 122–131. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Yusuke Miyao, Takashi Ninomiya, and Jun’ichi. Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In In Proceedings of IJCNLP-04. Joakim Nivre and Mario Scholz. 2004b. Deterministic dependency parsing of english text. In Proceedings of Coling 2004, pages 64–70. COLING. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915–926, Denver, Colorado. Association for Computational Linguistics. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63–72, Dublin, Ireland. Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proceedings of the ACL, pages 2037– 2048, Vancouver, Canada. Hao Peng, Sam Thomson, and Noah A. Smith. 2018a. Backpropagating through structured argmax using a spigot. In Proceedings of the 56th Annual Meeting of the ACL, pages 1863–1873. Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018b. Learning joint semantic parsers from disjoint data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1492–1502, New Orleans, Louisiana. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532– 1543. 2430 Corentin Ribeyre, Eric Villemonte de la Clergerie, and Djam´e Seddah. 2014. Alpage: Transition-based semantic graph parsing with syntactic features. 
In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 97– 103, Dublin, Ireland. Association for Computational Linguistics and Dublin City University. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 753–760. Coling 2008 Organizing Committee. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the ACL (Short Papers), pages 231–235. Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011. Unsupervised dependency parsing without gold part-of-speech tags. In Proceedings of the 2011 Conference on EMNLP, pages 1281–1290. Alper Tokg¨oz and Eryigit G¨ulsen. 2015. Transitionbased dependency dag parsing using dynamic oracles. In Proceedings of the ACL Student Research Workshop., pages 22–27. Yoshimasa Tsuruoka and Jun’ichi Tsujii. 2005. Bidirectional inference with the easiest-first strategy for tagging sequence data. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 467–474, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Yuxuan Wang, Wanxiang Che, Jiang Guo, and Ting Liu. 2018. A neural transition-based approach for semantic dependency graph parsing. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. pages 5–32. Springer. Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceedings of HLT:NAACL, pages 245–253, Boulder, Colorado. Lidan Zhang and Kwok Ping Chan. 2009. Dependency parsing with energy-based reinforcement learning. In Proceedings of the IWPT, pages 234–237, Paris, France. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the ACL, pages 665–676, Valencia, Spain. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the EMNLP, pages 562–571.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2431–2441 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2431 GCDT: A Global Context Enhanced Deep Transition Architecture for Sequence Labeling Yijin Liu1∗, Fandong Meng2, Jinchao Zhang2, Jinan Xu1†, Yufeng Chen1 and Jie Zhou2 1Beijing Jiaotong University, China 2Pattern Recognition Center, WeChat AI, Tencent Inc, China [email protected] {fandongmeng, dayerzhang, withtomzhou}@tencent.com {jaxu,chenyf}@bjtu.edu.cn Abstract Current state-of-the-art systems for the sequence labeling tasks are typically based on the family of Recurrent Neural Networks (RNNs). However, the shallow connections between consecutive hidden states of RNNs and insufficient modeling of global information restrict the potential performance of those models. In this paper, we try to address these issues, and thus propose a Global Context enhanced Deep Transition architecture for sequence labeling named GCDT. We deepen the state transition path at each position in a sentence, and further assign every token with a global representation learned from the entire sentence. Experiments on two standard sequence labeling tasks show that, given only training data and the ubiquitous word embeddings (Glove), our GCDT achieves 91.96 F1 on the CoNLL03 NER task and 95.43 F1 on the CoNLL2000 Chunking task, which outperforms the best reported results under the same settings. Furthermore, by leveraging BERT as an additional resource, we establish new stateof-the-art results with 93.47 F1 on NER and 97.30 F1 on Chunking 1. 1 Introduction Sequence labeling tasks, including part-of-speech tagging (POS), syntactic chunking and named entity recognition (NER), are fundamental and challenging problems of Natural Language Processing (NLP). Recently, neural models have become the de-facto standard for high-performance systems. Among various neural networks for sequence labeling, bi-directional RNNs (BiRNNs), especially BiLSTMs (Hochreiter and Schmidhuber, 1997) have become a dominant method on ∗This work was done when Yijin Liu was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China †Jinan Xu is the corresponding author of the paper. 1Code is available at: https://github.com/Adaxry/GCDT. multiple benchmark datasets (Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Peters et al., 2017). However, there are several natural limitations of the BiLSTMs architecture. For example, at each time step, the BiLSTMs consume an incoming word and construct a new summary of the past subsequence. This procedure should be highly nonlinear, to allow the hidden states to rapidly adapt to the mutable input while still preserving a useful summary of the past (Pascanu et al., 2014). While in BiLSTMs, even stacked BiLSTMs, the transition depth between consecutive hidden states are inherently shallow. Moreover, global contextual information, which has been shown highly useful for model sequence (Zhang et al., 2018), is insufficiently captured at each token position in BiLSTMs. Subsequently, inadequate representations flow into the final prediction layer, which leads to the restricted performance of BiLSTMs. In this paper, we present a global context enhanced deep transition architecture to eliminate the mentioned limitations of BiLSTMs. 
In particular, we base our network on the deep transition (DT) RNN (Pascanu et al., 2014), which increases the transition depth between consecutive hidden states for richer representations. Furthermore, we assign each token an additional representation, which is a summation of hidden states of a specific DT over the whole input sentence, namely global contextual embedding. It’s beneficial to make more accurate predictions since the combinatorial computing between diverse token embeddings and global contextual embedding can capture useful representations in a way that improves the overall system performance. We evaluate our GCDT on both CoNLL03 and CoNLL2000. Extensive experiments on two benchmarks suggest that, merely given training data and publicly available word embeddings 2432 (Glove), our GCDT surpasses previous state-ofthe-art systems on both tasks. Furthermore, by exploiting BERT as an extra resource, we report new state-of-the-art F1 scores with 93.47 on CoNLL03 and 97.30 on CoNLL2000. The main contributions of this paper can be summarized as follows: • We are the first to introduce the deep transition architecture for sequence labeling, and further enhance it with the global contextual representation at the sentence level, named GCDT. • GCDT substantially outperforms previous systems on two major tasks of NER and Chunking. Moreover, by leveraging BERT as an extra resource to enhance GCDT, we report new state-of-the-art results on both tasks. • We conduct elaborate investigations of global contextual representation, model complexity and effects of various components in GCDT. 2 Background Given a sequence of X = {x1, x2, · · · , xN} with N tokens and its corresponding linguistic labels Y = {y1, y2, · · · , yN} with the equal length, the sequence labeling tasks aim to learn a parameterized mapping function fθ : X →Y from input tokens to task-specific labels. Typically, the input sentence is firstly encoded into a sequence of distributed representations X = {x1, x2, · · · , xN} by character-aware and pretrained word embeddings. The majority of highperformance models use bidirectional RNNs, BiLSTMs in particular, to encode the token embeddings X into context-sensitive representations for the final prediction. Additionally, it’s beneficial to model and predict labels jointly, thus a subsequent conditional random field (CRF Lafferty et al., 2001) is commonly utilized as a decoder layer. At the training stage, those models maximize the log probability of the correct sequence of tags as follows: log(p(y|X)) = s(X, y) −log( X ey∈Yx es(X,ey)) (1) where s(·) is the score function and Yx is the set of all possible sequence of tags. Typically, the Viterbi algorithm (Forney, 1973) is utilized to search the label sequences with maximum score when decoding: y∗= arg max ey∈Yx s(x, ey) (2) 3 GCDT 3.1 Overview In this section, we start with a brief overview of our presented GCDT and then proceed to structure the following sections with more details about each submodule. As shown in Figure 1, there are three deep transition modules in our model, namely global contextual encoder, sequence labeling encoder and decoder accordingly. Token Representation Given a sentence X = {x1, x2, ..., XN} with N tokens, our model first captures each token representation xt by concatenating three primary embeddings: xt = [ct; wt; g] (3) 1. Character level word embedding ct is acquired from Convolutional Neural Network. (CNN) (dos Santos and Zadrozny, 2014) 2. 
Pre-trained word embedding wt is obtained from the lookup table initialized by Glove2. 3. Global contextual embedding g is extracted from bidirectional DT, and more details will be described in the following paragraphs. The global embedding g is computed by mean pooling over all hidden states {hg 1, hg 2, · · · , hg N} of global contextual encoder (right part in Figure 1). For simplicity, we can take “DT” as a reinforced Gated Recurrent Unit (GRU Chung et al., 2014), and more details about DT will be described in the next section. Thus g is computed as follows: g = 1 N n X t=1 hg t (4) hg t = [−→ h g t ; ←− h g t ] (5) −→ h g t = −−→ DTg(ct, wt; θ−−→ DT g) (6) ←− h g t = ←−− DTg(ct, wt; θ←−− DT g) (7) 2https://nlp.stanford.edu/projects/glove/ 2433 ŏ ŏ ŏ ŏ ŏ ŏ ŏ Mean Pooling ŏ ŏ ŏ ŏ ŏ ŏ ŏ ŏ Global Contextual Encoder Sequence Labeling Encoder <s> y1 yn-1 ŏ Sequence Labeling Decoder yt-1 ŏ ŏ Softmax ht y1 yn ŏ y2 st h1 h2 hn ŏ Word Char Global Forward DT cell Backward DT cell Word Embedding Plain DT cell ŏ ŏ ŏ ŏ ŏ ŏ x1 x2 xn ŏ ŏ ŏ ŏ x1 xn-1 xn ŏ ŏ ŏ ŏ w1 c1 x1 x2 xn ŏ x1 xn-1 xn ŏ w1 c1 g w2 c2 g wn cn g w1 c1 g cn-1 g wn cn g wn-1 w2 c2 wn cn w1 c1 wn cn wn-1 cn-1 ŏ Global ŏ Figure 1: Overview of GCDT. The global contextual encoder (on the right) serves as an enhancement of token representation. The sequence labeling encoder and decoder (on the left) take charge of the task-specific predictions. Sequence Labeling Encoder Subsequently, the concatenated token embeddings xt (Eq. 3) is fed into the sequence labeling encoder (bottom left part in Figure 1). ht = [−→ ht; ←− ht] (8) −→ ht = −−→ DTen(xt, −→ h t−1; θ−−→ DT en) (9) ←− ht = ←−− DTen(xt, ←− h t−1; θ←−− DT en) (10) Sequence Labeling Decoder Considering the t-th word in this sentence, the output of sequence labeling encoder ht along with the past label embedding yt−1 are fed into the decoder (top left part in Figure 1). Subsequently, the output of decoder st is transformed into lt for the final softmax over the tag vocabulary. Formally, the label of word xt is predicted as the probabilistic equation (Eq. 13) st = DTde(ht, yt−1; θDTde) (11) lt = stWl + bl (12) P(yt = j|x) = softmax(lt)[j] (13) As we can see from the above procedures and Figure 1, our GCDT firstly encodes the global contextual representation along the sequential axis by DT, which is utilized to enrich token representations. At each time step, we encode the past label information jointly using the sequence labeling decoder instead of resorting to CRF. Additionally, we employ beam search algorithm to infer the most probable sequence of labels when testing. 3.2 Deep Transition RNN Deep transition RNNs extend conventional RNNs by increasing the transition depth of consecutive hidden states. Previous studies have shown the superiority of this architecture on both language modeling (Pascanu et al., 2014) and machine translation (Barone et al., 2017; Meng and Zhang, 2019). Particularly, Meng and Zhang (2019) propose to maintain a linear transformation path throughout the deep transition procedure with a linear gate to enhance the transition structure. Following Meng and Zhang (2019), the deep transition block in our hierarchical model is composed of two key components, namely Linear Transformation enhanced GRU (L-GRU) and Transition GRU (T-GRU). At each time step, LGRU first encodes each token with an additional linear transformation of the input embedding, then the hidden state of L-GRU is passed into a chain of 2434 T-GRU connected merely by hidden states. 
Afterwards, the output “state” of the last T-GRU for the current time step is carried over as “state” input of the first L-GRU for the next time step. Formally, in a unidirectional network with transition number of L, the hidden state of the t-th token in a sentence is computed as: h0 i = L-GRU(xi, hL i−1) (14) hj i = T-GRUj(hj−1 i ) 1 ≤j ≤L (15) Linear Transformation Enhanced GRU LGRU extends the conventional GRU by an additional linear transformation of the input token embeddings. At time step t, the hidden state of LGRU is computed as follows: ht = (1 −zt) ⊙ht−1 + zt ⊙eht (16) eht = tanh(Wxhxt + rt ⊙(Whhht−1)) + lt ⊙Wxxt (17) where Wxh and Whh are parameter matrices, and reset gate rt and update gate zt are same as GRU: rt = σ(Wxrxt + Whrht−1) (18) zt = σ(Wxzxt + Whzht−1) (19) The linear transformation Wxxt in candidate hidden state eht (Eq. 17) is regulated by the linear gate lt, which is computed as follows: lt = σ(Wxlxt + Whlht−1) (20) Transition GRU T-GRU is a special case of conventional GRU, which only takes hidden states from the adjacent lower layer as inputs. At time step t at transition depth l, the hidden state of TGRU is computed as follows: hl t = (1 −zl t) ⊙hl−1 t + zl t ⊙eht l (21) eht l = tanh(rl t ⊙(Wl hhl−1 t )) (22) Reset gate rt and update gate zt also only take hidden states as input, which are computed as: rl = σ(Wl rhl−1) (23) zt = σ(Wl zhl−1) (24) As indicated above, at each time step of our deep transition block, there is a L-GRU in the bottom and several T-GRUs on the top of L-GRU. 3.3 Local Word Representation Charater-aware word embeddings It has been demonstrated that character level information (such as capitalization, prefix and suffix) (Collobert et al., 2011; dos Santos and Zadrozny, 2014) is crucial for sequence labeling tasks. In our GCDT, the character sets consist of all unique characters in datasets besides the special symbol “PAD” and “UNK”. We use one layer of CNN followed by max pooling to generate character-aware word embeddings. Pre-trained word embeddings The pre-trained word embeddings have been indicated as a standard component of neural network architectures for various NLP tasks. Since the capitalization feature of words is crucial for sequence labeling tasks (Collobert et al., 2011), we adopt word embeddings trained in the case sensitive schema. Both the character-aware and pre-trained word embeddings are context-insensitive, which are called local word representations compared with global contextual embedding in the next section. 3.4 Global Contextual Embedding We adopt an independent deep transition RNN named global contextual encoder (right part in Figure 1) to capture global features. In particular, we transform the hidden states of global contextual encoder into a fixed-size vector with various strategies, such as mean pooling, max pooling and self-attention mechanism (Vaswani et al., 2017). According to the preliminary experiments, we choose mean pooling strategy considering the balance between effect and efficiency. In conventional BiRNNs, the global contextual feature is insufficiently modeled at each position, as the nature of recurrent architecture makes RNN partial to the most recent input token. While our context-aware representation is incorporated with local word embeddings directly, which assists in capturing useful representations through combinatorial computing between diverse local word embeddings and the global contextual embedding. We further investigate the effects on positions where the global embedding is used. 
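To make the deep transition block concrete, the following is a minimal PyTorch sketch of Eq. 14–24: an L-GRU at the bottom of each time step followed by a chain of T-GRUs, whose final state is carried over to the next position. Class and variable names are illustrative and are not taken from the released implementation; a bidirectional encoder would run a second instance over the reversed sequence and concatenate the two outputs, as in Eq. 5 and Eq. 8.

```python
import torch
import torch.nn as nn


class LGRUCell(nn.Module):
    """Linear Transformation enhanced GRU (Eq. 16-20): a GRU cell with an extra
    linear path l_t * W_x x_t added to the candidate hidden state."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # r_t, z_t, l_t gates computed jointly (equivalent to Eq. 18-20)
        self.x2gates = nn.Linear(input_size, 3 * hidden_size)
        self.h2gates = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.x2h = nn.Linear(input_size, hidden_size)                 # W_xh
        self.h2h = nn.Linear(hidden_size, hidden_size, bias=False)    # W_hh
        self.x2lin = nn.Linear(input_size, hidden_size, bias=False)   # W_x (linear path)

    def forward(self, x, h_prev):
        r, z, l = torch.sigmoid(self.x2gates(x) + self.h2gates(h_prev)).chunk(3, dim=-1)
        h_cand = torch.tanh(self.x2h(x) + r * self.h2h(h_prev)) + l * self.x2lin(x)
        return (1 - z) * h_prev + z * h_cand                          # Eq. 16


class TGRUCell(nn.Module):
    """Transition GRU (Eq. 21-24): consumes only the hidden state from below."""
    def __init__(self, hidden_size):
        super().__init__()
        self.h2gates = nn.Linear(hidden_size, 2 * hidden_size)
        self.h2h = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h_below):
        r, z = torch.sigmoid(self.h2gates(h_below)).chunk(2, dim=-1)
        h_cand = torch.tanh(r * self.h2h(h_below))
        return (1 - z) * h_below + z * h_cand


class DeepTransitionRNN(nn.Module):
    """One direction of a DT block: at every position, an L-GRU followed by a
    chain of T-GRUs; the last T-GRU state is the recurrent state for the next
    position (Eq. 14-15)."""
    def __init__(self, input_size, hidden_size, transition_number=4):
        super().__init__()
        self.lgru = LGRUCell(input_size, hidden_size)
        self.tgrus = nn.ModuleList([TGRUCell(hidden_size) for _ in range(transition_number)])
        self.hidden_size = hidden_size

    def forward(self, inputs):                       # inputs: (batch, seq_len, input_size)
        h = inputs.new_zeros(inputs.size(0), self.hidden_size)
        outputs = []
        for t in range(inputs.size(1)):
            h = self.lgru(inputs[:, t], h)           # Eq. 14
            for tgru in self.tgrus:                  # Eq. 15
                h = tgru(h)
            outputs.append(h)
        return torch.stack(outputs, dim=1)           # (batch, seq_len, hidden_size)
```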
(Section 5.1) 2435 4 Experiments 4.1 Datasets and Metric NER The CoNLL03 NER task (Sang and De Meulder, 2003) is tagged with four linguistic entity types (PER, LOC, ORG, MISC). Standard data includes train, development and test sets. Chunking The CoNLL2000 Chunking task (Sang and Buchholz, 2000) defines 11 syntactic chunk types (NP, VP, PP, etc.). Standard data includes train and test sets. Metric We adopt the BIOES tagging scheme for both tasks instead of the standard BIO2, since previous studies have highlighted meaningful improvements with this scheme (Ratinov and Roth, 2009). We take the official conlleval 3 as the token-level F1 metric. Since the data size if relatively small, we train each final model for 5 times with different parameter initialization and report the mean and standard deviation F1 value. 4.2 Implementation Details All trainable parameters in our model are initialized by the method described by Glorot and Bengio (2010). We apply dropout (Srivastava et al., 2014) to embeddings and hidden states with a rate of 0.5 and 0.3 respectively. All models are optimized by the Adam optimizer (Kingma and Ba, 2014) with gradient clipping of 5 (Pascanu et al., 2013). The initial learning rate α is set to 0.008, and decrease with the growth of training steps. We monitor the training process on the development set and report the final result on the test set. One layer CNN with a filter of size 3 is utilized to generate 128-dimension word embeddings by max pooling. The cased, 300d Glove is adapted to initialize word embeddings, which is frozen in all models. In the auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed all the time. Empirically, We assign the following hyperparameters with default values except mentioned later. We set batch size to 4096 at the token level, transition number to 4, hidden size of sequence labeling encoder and decoder to 256, hidden size of global contextual encoder to 128. 3https://www.clips.uantwerpen.be/conll2000/chunking/ conlleval.txt Models F1 (Collobert et al., 2011)* 89.59 (Huang et al., 2015)* 90.10 (Passos et al., 2014)* 90.90 (Lample et al., 2016) 90.94 (Yang et al., 2016)* 90.94 (Luo et al., 2015)* 91.20 (Ma and Hovy, 2016) 91.21 (Yang et al., 2017b)*† 91.26 (Zhang et al., 2018) 91.57 (Yang et al., 2017a) 91.62 (Chiu and Nichols, 2016)*† 91.62 ± 0.33 (Xin et al., 2018) 91.64 ± 0.17 GCDT 91.96 ± 0.04 GCDT + BERTLARGE 93.47 ± 0.03 Table 1: F1 scores on CoNLL03. † refers to models trained on both training and development set. * refers to adopting external task-specific resources. Models F1 (Collobert et al., 2011)* 94.32 (Huang et al., 2015)* 94.46 (Yang et al., 2017b) 94.66 (Zhai et al., 2017) 94.72 (Hashimoto et al., 2017) 95.02 (Søgaard and Goldberg, 2016) 95.28 (Xin et al., 2018) 95.29 ± 0.08 GCDT 95.43 ± 0.06 GCDT + BERTLARGE 97.30 ± 0.03 Table 2: F1 scores on CoNLL2000 Chunking task. * refers to adopting external task-specific resources (like Gazetteers or annotated data). 4.3 Main Results The main results of our GCDT on the CoNLL03 and CoNLL2000 are illustrated in Table 1 and Table 2 respectively. Given only standard training data and publicly available word embeddings, our GCDT achieves state-of-the-art results on both tasks. It should be noted that some results on NER are not comparable to ours directly, as their final models are trained on both training and development data 4. 
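As a side note on the evaluation setup, the BIOES scheme adopted above is obtained from BIO2 by a deterministic re-tagging of each entity span. A minimal sketch of this standard conversion is given below (it is common practice following Ratinov and Roth (2009), not code from the paper).

```python
def bio2_to_bioes(tags):
    """Convert one sentence's BIO2 tags (e.g. ['B-PER', 'I-PER', 'O']) to BIOES."""
    bioes = []
    for i, tag in enumerate(tags):
        if tag == 'O':
            bioes.append('O')
            continue
        prefix, label = tag.split('-', 1)
        next_tag = tags[i + 1] if i + 1 < len(tags) else 'O'
        continues = next_tag == 'I-' + label        # entity continues on the next token
        if prefix == 'B':
            bioes.append(('B-' if continues else 'S-') + label)
        else:                                       # prefix == 'I'
            bioes.append(('I-' if continues else 'E-') + label)
    return bioes


# Example: single-token entities become S-, entity-final tokens become E-
assert bio2_to_bioes(['B-PER', 'I-PER', 'O', 'B-LOC']) == ['B-PER', 'E-PER', 'O', 'S-LOC']
```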
More notably, our GCDT surpasses the models that exploit additional task-specific resources or annotated corpora (Luo et al., 2015; Yang et al., 2017b; Chiu and Nichols, 2016). Additionally, we conduct experiments by leveraging the well-known BERT as an external resource for relatively fair comparison with models 4We achieve F1 score of 92.18 when training on both training and development data without extra resources. 2436 Models F1 (Rei, 2017) 86.26 (Liu et al., 2017) 91.71 ± 0.10 (Peters et al., 2017)† 91.93 ± 0.19 (Peters et al., 2018) 92.20 (Clark et al., 2018) 92.61 (2018) BERTBASE 92.40 (2018) BERTLARGE 92.80 (Akbik et al., 2018)† 93.09 GCDT + BERTLARGE 93.47 ± 0.03 Table 3: F1 scores on the CoNL03 NER task by leveraging language model, † refers to models trained on both training and development data. We establish new state-of-the-art result on this task. Models F1 (Rei, 2017) 93.88 (Liu et al., 2017) 95.96 ± 0.08 (Peters et al., 2017) 96.37 ± 0.05 (Akbik et al., 2018) 96.72 ± 0.05 (Clark et al., 2018) 97.00 GCDT + BERTLARGE 97.30 ± 0.03 Table 4: F1 scores on the CoNLL2000 Chunking task by leveraging language model. We establish new stateof-the-art result on this task. that utilize external language models trained on massive corpora. Especially, Rei (2017) and Liu et al. (2017) build task-specific language models only on supervised data. Table 3 and Table 4 show that our GCDT outperforms previous state-of-theart results substantially at 93.47 (+0.38) on NER and 97.30 (+0.30) on Chunking when contrasted with a collection of highly competitive baselines. 5 Analysis We choose the CoNLL03 NER task as example to elucidate the properties of our GCDT and conduct several additional experiments. 5.1 Where to Use the Global Representation? In this experiment, we investigate the effects of locations on the global contextual embedding in our hierarchical model. In particular, we use the global embedding g to augment: • input of final softmax layer ; xsoftmax k = [hdecoder k ; yk−1; g] • input of sequence labeling decoder; xdecoder k = [hencoder k ; yk−1; g] # Use global embedding at F1 0 None 91.60 1 Input of final softmax 91.48 2 Input of sequence labeling decoder 91.45 3 Input of sequence labeling encoder 91.96 Table 5: Comparison of CoNLL03 test F1 when the global contextual embedding is used at different layers. • input of sequence labeling encoder; xencoder k = [wk; ck; g] Table 5 shows that the global embedding g improves performance when utilized at the relative low layer (row 3) , while g may do harm to performances when adapted at the higher layers (row 0 vs. row 1 & 2). In the last option, g is incorporated to enhance the input token representation for sequence labeling encoder, the combinatorial computing between the multi-granular local word embeddings (wk and ck) and global embedding g can capture more specific and richer representations for the prediction of each token, and thus improves overall system performance. While the other two options (row 1, 2) concatenate the highly abstract g with hidden states (hencoder k or hdecoder k ) from the higher layers, which may bring noise to token representation due to the similar feature spaces and thus hurt task-specific predictions. 5.2 Comparing with Stacked RNNs Although our proposed GCDT bears some resemblance to the conventional stacked RNNs, they are very different from each other. 
Firstly, although the stacked RNNs can process very deep architectures, the transition depth between consecutive hidden states in the token level is still shallow. Secondly, in the stacked RNNs, the hidden states along the sequential axis are simply fed into the corresponding positions of the higher layers, namely only position-aware features are transmitted in the deep architecture. While in GCDT, the internal states in all token position of the global contextual encoder are transformed into a fixedsize vector. This contextual-aware representation provides more general and informative features of the entire sentence compared with stacked RNNs. To obtain rigorous comparisons, we stack two layers of deep transition RNNs instead of conventional RNNs with similar parameter numbers of GCDT. According to the results in Table 6, the stacked-DT improves the performance of the orig2437 Model # Parameters F1 DT 5.6M 91.60 stacked-DT 8.4M 91.61 GCDT 7.4M 91.96 Table 6: Comparison of CoNLL03 test F1 between stacked RNNs and GCDT. inal DT slightly, while there is still a large margin between GCDT and the stacked-DT. As we can see, our GCDT achieves a much better performance than stacked-DT with a smaller parameter size, which further verifies that our GCDT can effectively leverage global information to learn more useful representations for sequence labeling tasks. 5.3 Ablation Experiments We conduct ablation experiments to investigate the impacts of various components in GCDT. More specifically, we remove one kind of token embedding from char-aware, pre-trained and global embeddings for sequence labeling encoder each time, and utilize DT or conventional GRU with similar model sizes 5. Results of different combinations are presented in Table 7. Given the same input embeddings, DT surpasses the conventional GRU substantially in most cases, which further demonstrates the superiority of DT in sequence labeling tasks. Our observations on character-level and pre-trained word embeddings suggest that they have a significant impact on highly competitive results (row 1 & 3 vs. row 5), which is consistent with previous work (dos Santos and Zadrozny, 2014; Lample et al., 2016). Furthermore, the global contextual embedding substantially improves the performances on both DT and GRU based models (row 6 & 7 vs. row 4 & 5). 5.4 Effect of BERT WordPiece is adopted to tokenize sequence in BERT, which may cut a word into pieces, such as converting “Johanson” into “Johan ##son”. Therefore, additional efforts should be taken to maintain alignments between input tokens and their corresponding labels. Three strategies are conducted to obtain the exclusive BERT embedding of each token in a sequence. Firstly, we take the first subword as the whole word embedding after tokenization, which is employed in the original paper of 5To avoid the effect of various model size, we fine tuning hidden size of each model, and more details in Section 5.5 # Embeddings RNN F1 0 No char GRU 91.14 1 No char DT 90.94 2 No Glove GRU 87.23 3 No Glove DT 88.59 4 No global GRU 91.32 5 No global DT 91.60 6 All GRU 91.42 7 All DT 91.96 Table 7: Ablation experiments on the CoNLL03 to investigate the impacts of various components, where “char” indicates character-aware word embeddings, “Glove” indicates pre-trained word embeddings, and “global” indicates global contextual embedding. 
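The three WordPiece alignment strategies discussed in Section 5.4 (first subword, mean pooling, max pooling over each word's subword vectors) can be sketched as follows. The function name and the word-index grouping convention are illustrative assumptions rather than the authors' code.

```python
import torch


def pool_subwords(subword_vectors, word_ids, strategy='mean'):
    """subword_vectors: (num_subwords, dim) BERT outputs for one sentence.
    word_ids: one word index per subword, e.g. 'Johan ##son was' -> [0, 0, 1].
    Returns one vector per original token, shape (num_words, dim)."""
    num_words = max(word_ids) + 1
    pooled = []
    for w in range(num_words):
        idx = [i for i, wid in enumerate(word_ids) if wid == w]
        pieces = subword_vectors[idx]               # all WordPiece vectors of word w
        if strategy == 'first':                     # first-subword strategy
            pooled.append(pieces[0])
        elif strategy == 'max':                     # max pooling
            pooled.append(pieces.max(dim=0).values)
        else:                                       # mean pooling
            pooled.append(pieces.mean(dim=0))
    return torch.stack(pooled)
```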
BERT F1 Type Layer Pooling BASE 6 first 92.70 max 92.88 mean 92.99 12 first 92.89 max 92.74 mean 92.92 LARGE 12 first 92.88 max 93.23 mean 93.36 18 first 93.18 max 93.07 mean 93.47 24 first 92.57 max 92.60 mean 92.83 Table 8: Comparison of CoNLL03 F1 scores when various types, layers and pooling strategies of BERT are employed. “first” indicates the first sub-word embedding, “mean” and “max” refer to mean and max pooling correspondingly. BERT (Devlin et al., 2018). Mean and max poolings are used as the latter two strategies. Results of various combinations of BERT type, layer and pooling strategy are illustrated in Table 8. It’s reasonable that BERT trained on large model surpasses the smaller one in most cases due to the larger model capacity and richer contextual representation. For the pooling strategy, “mean” is considered to capture more comprehensive representations of rare words than “first” and “max”, thus better average performances. Additionally, we hypothesize that the higher layers in BERT encode more abstract and semantic features, while the lower ones prefer general and syntax infor2438 # Global Contextual Encoder Sequence Labeling Module # Parameters F1 0 GRU-384 GRU-384 7.8M 91.42 1 GRU-384 DT4-256 9.5M 91.53 2 GRU-512 DT4-256 11.2M 91.49 3 DT2-128 GRU-384 5.7M 91.45 4 DT2-128 DT4-256 7.2M 91.72 5 DT4-128 DT4-256 7.4M 91.96 Table 9: F1 scores on the CoNLL03 and parameter sizes of various models, where “GRU-384” indicates the conventional GRU with hidden size of 384, while “DT2-128” refers to deep transition RNN with transition number of 2 and hidden size of 128, similarly for “DT4-256”. mation, which is more helpful for our NER and Chunking tasks. These hypotheses are consistent with results emerged in Table 8. 5.5 Model Complexity One way of measuring the complexity of a neural model is through the total number of trainable parameters. In GCDT, the global contextual encoder increases parameter numbers of the sequence labeling encoder due to the enlargement of input dimensions, thus we run additional experiments to verify whether the increment of parameters has a great affection on performances. Empirically, we replace DT with conventional GRU in the global contextual encoder and sequence labeling module (both encoder and decoder) respectively. Results of various combinations are shown in Table 9. Observations on parameter numbers show that DT outperforms GRU substantially, with a smaller size (row 4 & 5 vs. row 0). From the perspective of global contextual encoder, DT gives slightly better result compared with GRU (row 3 vs. row 0). We observe similar results in the sequence labeling module (row 1 & 2 vs. row 0). Intuitively, it should further improve performance when utilizing DT in both modules, which is consistent with the observations in Table 9 (row 4 & 5 vs. row 0). 6 Related Work Neural Sequence Labeling Collobert et al. (2011) propose a seminal neural architecture for sequence labeling, which learns useful representation from pre-trained word embeddings instead of hand-crafted features. Huang et al. (2015) develop the outstanding BiLSTMs-CRF architecture, which is improved by incorporating character-level LSTM (Lample et al., 2016), GRU (Yang et al., 2016), CNN (dos Santos and Zadrozny, 2014; Xin et al., 2018), IntNet (Xin et al., 2018). The shallow connections between consecutive hidden states in those models inspire us to deepen the transition path for richer representation. 
More recently, there has been a growing body of work exploring to leverage language model trained on massive corpora in both character level (Peters et al., 2017, 2018; Akbik et al., 2018) and token level (Devlin et al., 2018). Inspired by the effectiveness of language model embeddings, we conduct auxiliary experiments by leveraging the well-known BERT as an additional feature. Exploit Global Information Chieu and Ng (2002) explore the usage of global feature in the whole document by the co-occurrence of each token, which is fed into a maximum entropy classifier. With the widespread application of distributed word representations (Mikolov et al., 2013) and neural networks (Collobert et al., 2011; Huang et al., 2015) in sequence labeling tasks, the global information is encoded into hidden states of BiRNNs. Specially, Yang et al. (2017a) leverage global sentence patterns for NER reranking. Inspired by the global sentence-level representation in S-LSTM (Zhang et al., 2018), we propose a more concise approach to capture global information, which has been demonstrated more effective on sequence lableing tasks. Deep Transition RNN Deep transition recurrent architecture extends conventional RNNs by increasing the transition depth between consecutive hidden states. Previous studies have shown the superiority of this architecture on both language model (Pascanu et al., 2014) and machine translation (Barone et al., 2017; Meng and Zhang, 2019). We follow the deep transition architecture in (Meng and Zhang, 2019), and extend it into a hierarchical model with the global contextual representation at the sentence level for sequence labeling tasks. 2439 7 Conclusion We propose a novel hierarchical neural model for sequence labeling tasks (GCDT), which is based on the deep transition architecture and motivated by global contextual representation at the sentence level. Empirical studies on two standard datasets suggest that GCDT outperforms previous state-ofthe-art systems substantially on both CoNLL03 NER task and CoNLL2000 Chunking task without additional corpora or task-specific resources. Furthermore, by leveraging BERT as an external resource, we report new state-of-the-art F1 scores of 93.47 on CoNLL03 and 97.30 on CoNLL2000. In the future, we would like to extend GCDT to other analogous sequence labeling tasks and explore its effectiveness on other languages. Acknowledgments Liu, Xu, and Chen are supported by the National Nature Science Foundation of China (Contract 61370130, 61473294 and 61502149), and Beijing Natural Science Foundation under Grant No. 4172047, and the Fundamental Research Funds for the Central Universities (2015JBM033), and the International Science and Technology Cooperation Program of China under grant No. 2014DFA11350. We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Antonio Valerio Miceli Barone, Jindrich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep architectures for neural machine translation. CoRR, abs/1707.07631. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: A maximum entropy approach using global information. In COLING 2002: The 19th International Conference on Computational Linguistics. Jason Chiu and Eric Nichols. 2016. 
Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. G David Forney. 1973. The viterbi algorithm. Proceedings of the IEEE, 61(3):268–278. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Kazuma Hashimoto, caiming xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint manytask model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923–1933. Association for Computational Linguistics. Sepp Hochreiter and Jrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. 2440 Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2017. Empower sequence labeling with task-aware neural language model. CoRR, abs/1709.04109. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Fandong Meng and Jinchao Zhang. 2019. DTMT: A novel deep transition architecture for neural machine translation. AAAI. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. 
How to construct deep recurrent neural networks. In ICLR. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310–1318. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 78–86. Association for Computational Linguistics. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756–1765. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the thirteenth conference on computational natural language learning, pages 147–155. Association for Computational Linguistics. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2121–2130. Association for Computational Linguistics. Erik F Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task: Chunking. arXiv preprint cs/0009008. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. arXiv preprint cs/0306050. C´ıcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In ICML. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–235. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Yingwei Xin, Ethan Hart, Vibhuti Mahajan, and JeanDavid Ruvini. 2018. Learning better internal structure of words for sequence labeling. CoRR, abs/1810.12443. Jie Yang, Yue Zhang, and Fei Dong. 2017a. Neural reranking for named entity recognition. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 784–792. INCOMA Ltd. Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270. 
Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017b. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. CoRR, abs/1701.04027. 2441 Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentencestate lstm for text representation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 317–327. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2442–2452 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2442 Unsupervised Learning of PCFGs with Normalizing Flow Lifeng Jin Department of Linguistics The Ohio State University [email protected] Finale Doshi-Velez Harvard University [email protected] Timothy Miller Boston Children’s Hospital & Harvard Medical School [email protected] William Schuler Department of Linguistics The Ohio State University [email protected] Lane Schwartz Department of Linguistics University of Illinois at Urbana-Champaign [email protected] Abstract Unsupervised PCFG inducers hypothesize sets of compact context-free rules as explanations for sentences. These models not only provide tools for low-resource languages, but also play an important role in modeling language acquisition (Bannard et al., 2009; Abend et al., 2017). However, current PCFG induction models, using word tokens as input, are unable to incorporate semantics and morphology into induction, and may encounter issues of sparse vocabulary when facing morphologically rich languages. This paper describes a neural PCFG inducer which employs context embeddings (Peters et al., 2018) in a normalizing flow model (Dinh et al., 2015) to extend PCFG induction to use semantic and morphological information1. Linguistically motivated similarity penalty and categorical distance constraints are imposed on the inducer as regularization. Experiments show that the PCFG induction model with normalizing flow produces grammars with state-of-the-art accuracy on a variety of different languages. Ablation further shows a positive effect of normalizing flow, context embeddings and proposed regularizers. 1 Introduction Unsupervised PCFG inducers (Jin et al., 2018b) automatically bracket sentences into nested spans, and label these spans with consistent, linguistically relevant syntactic categories, which may be useful in downstream applications or linguistic research on under-resourced languages. Their success also provides evidence for learnability of grammar in absence of strong linguistic universals (MacWhinney and Bates, 1993; Plunkett and Wood, 2004; Bannard et al., 2009). However, current PCFG induction models, using word tokens 1The code can be found at https://github.com/ lifengjin/acl_flow as input, are unable to incorporate semantics and morphology into induction, and may encounter issues of sparse vocabulary when facing morphologically rich languages. This paper describes a PCFG induction model which exploits recent advances in deep generative models and context embeddings to generalize over rare, morphologically rich forms. We contextualize a PCFG’s terminal emission rules with context embeddings (Peters et al., 2018) as observations, in order to bring context and subword information into the model. Probabilities for these contextualized terminal emission rules are modeled by transforming distributions with normalizing flow (Rezende and Mohamed, 2015; Dinh et al., 2015; He et al., 2018). Through invertible transformations, flow models transform simple distributions (e.g. Gaussian) into complex and potentially multi-modal distributions over observation vectors. 
These improvements help increase the expressivity of the induction model and give the model the ability to generalize over rare words, but still preserve the tractability of marginal likelihood computation so that inference is possible with marginal likelihood maximization. Experiments described in this paper show that the model is able to achieve state-of-the-art or competitive results on multiple languages compared with existing PCFG induction and unlabeled tree induction models, especially on languages where complex morphology may cause induction models with discrete observations to succumb to data sparsity. Further analyses show (1) that the flow-based inducer is able to use morphological and semantic information in embeddings for grammar induction, (2) that the model produces consistent and meaningful labels at phrasal and lexical levels, and (3) that both the normalizing flow and the linguistically-motivated regularization terms make substantial improvements to 2443 parsing accuracy. 2 PCFGs with vector terminals We first consider factoring the Chomsky normal form PCFG with C non-terminal categories into two separate parts: binary-branching nonterminal expansion rule2 probabilities, and unarybranching terminal emission rule probabilities. Given a tree as a set τ of nodes η undergoing non-terminal expansions cη →cη1 cη2 (where η ∈{1, 2}∗is a Gorn address specifying a path of left or right branches from the root), and a set τ′ of nodes η undergoing terminal emissions cη →xη (where xη is an embedding for the word at node η), the marginal probability of a sentence σi can be computed as: P(σi) = X τ,τ′ Y η∈τ P(cη →cη1 cη2) · Y η∈τ′ P(cη →xη) (1) We first define a set of Bernoulli distributions that distribute probability mass between these two sets of rules: P(Term = 1 | cη) = 1 1 + exp(−δ⊤cηd), (2) where cη is a non-terminal category, δcη is a Kronecker delta function – a vector with value one at index cη and zeros everywhere else – and δ⊤ cηd is a parameter for the Bernoulli distribution of cη with d ∈RC. Binary-branching non-terminal expansion rule probabilities for a non-terminal category cη are defined as: P(cη →cη1 cη2) = P(Term = 0 | cη) · exp(δ⊤ cηN)(δcη1 ⊗δcη2) exp(δ⊤cηN)1 (3) where ⊗is a Kronecker product, cη1 is the category of the left child, cη2 is the category of the right child, and δ⊤ cηN is a parameter vector for the multinomial distribution of the category cη with N ∈RC×C2. The contextualized unary-branching terminal emission rule probabilities for a preterminal category cη are defined as: P(cη →xη) = P(Term = 1 | cη)· fcη(xη; δ⊤ cηL) (4) 2They include the expansion rules generating the top node in the tree. where the terminal at node η is an observed word token, xη ∈RD is the vectorial representation of that token, fcη is a probability density or mass function, and δ⊤ cηL is a parameter vector for the probability function of the category cη. We can recover the multinomial PCFG formulation by setting xη to be a one-hot word representation and the probability function fcη to be a multinomial distribution parameterized by δ⊤ cηL. We can also set xη to be a word embedding and fcη to be Gaussian distributions parameterized by δ⊤ cηL, giving us a PCFG with Gaussian emission. In order to incorporate more information into the induction model, context embeddings (Peters et al., 2018) can be used here for xη. 
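A minimal PyTorch sketch of this parameterization with Gaussian emissions (i.e., before adding the flow of Section 3) is given below; parameter names follow d, N, M and S in the text, while tensor shapes and initialization are illustrative assumptions.

```python
import math

import torch
import torch.nn.functional as F


class VectorTerminalPCFG(torch.nn.Module):
    """Rule probabilities of Eq. 2-4 with diagonal-Gaussian emissions over
    terminal embeddings; parameter names follow d, N, M and S in the text."""
    def __init__(self, num_cats, emb_dim):
        super().__init__()
        self.d = torch.nn.Parameter(torch.zeros(num_cats))                   # Bernoulli logits
        self.N = torch.nn.Parameter(0.01 * torch.randn(num_cats, num_cats ** 2))
        self.M = torch.nn.Parameter(0.01 * torch.randn(num_cats, emb_dim))   # Gaussian means
        self.log_S = torch.nn.Parameter(torch.zeros(num_cats, emb_dim))      # log diag covariance
        self.num_cats = num_cats

    def log_expansion_probs(self):
        """log P(c -> c1 c2) for all categories, shape (C, C, C)."""
        log_nonterm = F.logsigmoid(-self.d)              # log P(Term = 0 | c)
        log_branch = F.log_softmax(self.N, dim=-1)       # Eq. 3: multinomial over C^2 child pairs
        C = self.num_cats
        return log_nonterm[:, None, None] + log_branch.view(C, C, C)

    def log_emission_probs(self, x):
        """log P(c -> x) for each token embedding; x: (T, D), result: (T, C)."""
        log_term = F.logsigmoid(self.d)                  # log P(Term = 1 | c), Eq. 2
        var = self.log_S.exp()
        diff = x[:, None, :] - self.M[None, :, :]        # (T, C, D)
        log_density = -0.5 * ((diff ** 2 / var).sum(-1)  # diagonal Gaussian log density, Eq. 4
                              + self.log_S.sum(-1)
                              + x.size(-1) * math.log(2 * math.pi))
        return log_term[None, :] + log_density
```

The resulting (C, C, C) expansion table and per-token (T, C) emission scores are the quantities the inside algorithm consumes when computing the marginal probability in Eq. 1.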
The ELMo model combines learned word embeddings with character embeddings through CNN encoders, and composes contextualized embeddings with bidirectional LSTMs over the combined representations. The output from the BiLSTM contains both subword information, word information and context information and is used as contextualized embeddings for words. While simple D-dimensional multivariate Gaussians can be used as the emission density f, it is unrealistic to assume that such embeddings follow simple Gaussian distributions. This work explores more complex transformed distributions using normalizing flows. 3 Normalizing flows Flow models (Dinh et al., 2015, 2017; Kingma and Dhariwal, 2018) are a class of deep generative models that model unknown yet complex distributions by transforming the observation through a series of invertible transformations to create latent representations to be used with known distributions like Gaussians. For PCFG induction with embeddings, we first consider the generative story for the observed embeddings. Let cη be a category label at the node η. M ∈RC×D is the matrix of the means of the Gaussian distributions for the latent representations, and S ∈RC×D the diagonal covariances with L = [M; S]. A probability model over trees may be defined as follows: 1. Sample an expansion decision Term ∼ Bernoulli  1 1+exp(−δcη ⊤d)  to expand node η with category cη to a lexical item, or to a binary branch. 2. If expanded as a binary branch (Term=0), given the category of the node cη, 2444 sample a non-terminal expansion, cη1 cη2 ∼Mult  exp(δcη ⊤N) exp(δcη ⊤N)1  . 3. If lexically expanded (Term = 1), sample from Gaussian with diagonal covariance over latent representations: hη ∼ N(δcη ⊤M, diag(δcη ⊤S)). 4. Again, if Term=1, transform the latent representation deterministically to generate the observed embedding xη for the token at η: xη = g(hη). In order to compute the likelihood given the observation, we need to invert this process. If we integrate over x′ η = g(hη), with the change-ofvariable formula, we have: fcη(xη; δ⊤ cηL) = Z P(cη →hη) δ(xη −g(hη)) dhη = Z P(cη →g−1(x′ η)) δ(xη −x′ η) det∂g−1 ∂x′η dx′ η = P(cη →g−1(xη)) · det∂g−1 ∂xη , (5) where δ here is the Dirac delta function. This can be used to directly compute the likelihood of the observed embedding exactly given a category. In order to make this calculation tractable, the requirements on g−1 are usually (1) that it is invertible, and (2) that computing the log Jacobian determinant is possible without calculating the full Jacobian matrix or its full determinant. Note that g need not be explicitly constructed as it is usually only used in generation, not in inference. There have been many proposed invertible functions that can be used as g−1. The volume preserving invertible transformation is first proposed by Dinh et al. (2015) in the NICE model and later used in unsupervised learning (He et al., 2018). Because of the volume preserving property, the log Jacobian determinant is always 0. This property may allow the structural features of the original embedding space to be better preserved than other, less restrictive, invertible functions. The invertible transformation g−1 consists of I stacked-up coupling layers. 
The input x to it is divided into two equal parts h(0) 1 , h(0) 2 : g−1 h(0) 1 h(0) 2   = h(I) 1 h(I) 2 , (6) and the coupling layers in g−1 transform the two parts at alternating layers: h(i−1) 1 h(i−1) 2 = h(i−2) 1 h(i−2) 2 + q(i−1)(h(i−2) 1 ) ; h(i) 1 h(i) 2 = h(i−1) 1 + q(i)(h(i−1) 2 ) h(i−1) 2 . (7) The volume-preserving restriction is removed in the coupling layer in the Real NVP model (Dinh et al., 2017), in which the coupling layers transform the inputs as follows: h(i−1) 1 h(i−1) 2 = h(i−2) 1 h(i−2) 2 ⊙exp(q(i−1) 1 (h(i−2) 1 )) + q(i−1) 2 (h(i−2) 1 ) ; h(i) 1 h(i) 2 = h(i−1) 1 ⊙exp(q(i) 1 (h(i−1) 2 )) + q(i) 2 (h(i−1) 2 ) h(i−1) 2 , (8) where ⊙is a Hadamard product. All q : RD/2 → RD/2 in both models can be arbitrary nonlinear transformations. For Real NVP, the log Jacobian determinant is: I/2 X i=1  q(2i−1) 1 (h(2i−2) 1 ) + q(2i) 1 (h(2i−1) 2 ) ⊤ 1. (9) 4 Regularization In order to avoid undesirable yet possible grammars, we impose two linguistically-motivated regularization terms onto the model. In experiments described in this paper, for the emission parameters, we want to discourage the model from finding a solution in which all words are equally likely to be generated by any category, so we impose a regularization term on the model to encourage the rows of M to be far apart. The flow models can learn arbitrary transformations over the pretrained context embeddings. Because each token in the corpus has an embedding, the flow models may learn transformations that cue offarbitrary information in those embeddings, effectively making changes to observations. A Euclidean distance penalty is put between the output of the flow transformation g−1(xη) and the input embedding xη to penalize the output drifting too far from the input 2445 embedding. The final objective to maximize is: L(σ) = 1 |σ| |σ| X i=0 log P(σi) + λ1 X d,e ∥δ⊤ d M −δ⊤ e M∥2 −λ2 X η∈σi ∥g−1(xη) −xη∥2, (10) where σ is a minibatch of sentences, a, b, c, d, e are all category labels, λ1 and λ2 are the weights for the two regularization terms and ∥. . . ∥n is the n-norm. 5 Experiments We report results of labeled parsing evaluation and unlabeled parsing evaluation against existing grammar induction and unsupervised parsing models. We evaluate our models on full English (The Penn Treebank; Marcus et al., 1993), Chinese (The Chinese Treebank 5.0; Xia et al., 2000) and German (NEGRA 2.0; Skut et al., 1998) constituency treebanks and the 20-or-fewer-word subsets for labeled parsing performance.3 For unlabeled parsing evaluation, we first report results on a set of languages with complex morphology chosen prior to evaluation. This set includes Czech and Russian, which are fusional languages, Korean and Uyghur, which are agglutinative languages, and Finnish, which has elements of both types. Dependency trees from the Universal Dependency Treebank (Nivre et al., 2016) of these languages are converted into constituency trees (Collins et al., 1999) by keeping constituents that have a single incoming and no outgoing dependency arc. For example, constituents like noun phrases that are kept in conversion may only have one incoming arc from the main verb, and no outgoing arc to any modifier. Each dataset has 15,000 sentences randomly sampled from the dependency treebank (if the treebank has enough sentences), or is augmented with sentences randomly sampled from Wikipedia (if the treebank has fewer sentences). 
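For concreteness, the volume-preserving inverse transformation g−1 of Eq. 6–7 can be written as a stack of additive coupling layers that alternate which half of the vector is updated. The sketch below assumes the one-hidden-layer feed-forward q used in the experiments; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """One NICE coupling layer: adds q(other half) to one half of the vector,
    leaving the other half unchanged, so the Jacobian determinant is 1."""
    def __init__(self, dim, hidden_dim, update_first_half):
        super().__init__()
        half = dim // 2
        self.q = nn.Sequential(nn.Linear(half, hidden_dim), nn.ReLU(),
                               nn.Linear(hidden_dim, half))
        self.update_first_half = update_first_half

    def forward(self, h):
        h1, h2 = h.chunk(2, dim=-1)
        if self.update_first_half:
            h1 = h1 + self.q(h2)          # second line of Eq. 7
        else:
            h2 = h2 + self.q(h1)          # first line of Eq. 7
        return torch.cat([h1, h2], dim=-1)


class NICEInverse(nn.Module):
    """g^{-1} as a stack of coupling layers that alternate halves (Eq. 6-7);
    because the transformation is volume preserving, the emission density is
    simply the Gaussian density of g^{-1}(x), as in Eq. 5."""
    def __init__(self, dim, hidden_dim, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AdditiveCoupling(dim, hidden_dim, update_first_half=(i % 2 == 1))
             for i in range(num_layers)])

    def forward(self, x):
        h = x
        for layer in self.layers:
            h = layer(h)
        return h
```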
Finally, unlabeled parsing experiments on the three constituency treebanks are reported, one following Jin et al. (2018a) and one following Htut et al. (2018). The hyperparameters of the model for all experiments are tuned on the Brown Corpus portion of the Penn Treebank. We set the number of categories C to 30, the categorical distance constraint strength λ1 to be 0.0001, and the drifting penalty 3WSJ20test is the second half of WSJ20. λ2 to be 10. Function g−1 is set to have 4 coupling layers with q(i) being a feed-forward network with one hidden layer for both NICE and Real NVP, following He et al. (2018). We train the system until the marginal likelihood over the whole training set starts to oscillate, around 10,000 batches for smaller corpora and around 20,000 for larger corpora. Because the inside algorithm is quadratic on the length of the sentences, the batch size for training gets quadratically smaller from 400 to 1 as sentences get longer. We use the Adam optimizer (Kingma and Ba, 2015), initialized with learning rates 0.1 for d and N, and 0.001 for L and parameters in g−1. Means and standard deviations of evaluation metrics are reported in tables with 10 runs of the proposed system. We use ELMo embeddings (Peters et al., 2018) with 1024 dimensions from averaging representations from two BiLSTM layers and the word encoder in ELMo for all languages (Che et al., 2018).4 These embeddings are each trained with 20 million words from Wikipedia and Common Crawl. We initialize d and N with multinomials drawn from a Dirichlet distribution with 0.2 as the concentration parameter, following PCFG induction work with Bayesian models (Jin et al., 2018b). We assign the same diagonal variance matrix to all latent Gaussian distributions, calculated empirically from embeddings from 5000 randomly sampled sentences. M is initialized with the empirical mean of the same sampled embeddings, but with random Gaussian noise added to each row. The parameters of the normalizing flow g−1 are initialized from a uniform distribution with 0 mean and a standard deviation of √1/D. For labeled constituency evaluation, we compare against the state-of-the-art PCFG induction system DIMI (D2K15: depth bounded at 2 and 15 categories; Jin et al., 2018a) which takes word tokens as input and produces labeled trees.5 For unlabeled constituency evaluation, results from other unsupervised systems are used for comparison, including CCL (Seginer, 2007), UPPARSE (Ponvert et al., 2011), PRPN (Shen et al., 2018), as well as systems which use gold part-of-speech tags: DMV+CCM (Klein and Manning, 2002) and UML-DOP (Bod, 2006). 4https://github.com/HIT-SCIR/ ELMoForManyLangs. 5The DB-PCFG system (Jin et al., 2018b) is formally equivalent to the DIMI system. 2446 Model WSJ20test WSJ CTB20 CTB NEGRA20 NEGRA µ(σ) max µ(σ) max µ(σ) max µ(σ) max µ(σ) max µ(σ) max DIMI 23.0(6.5) 34.1 15.4(4.4) 20.7 13.6(1.6) 17.5 this work 22.8(6.0) 24.0 22.2(3.8) 27.0 19.7(1.9) 24.0 13.8(3.4) 20.2 26.2(2.8) 30.4 24.5(2.7) 29.1 Table 1: Recall-V-Measure scores for labeled grammar induction models trained on the listed treebanks with punctuation. For all tables, µ (σ) means the mean (standard deviation) of the reported scores. 5.1 Labeled parsing evaluation Metric: Labeled trees induced by DIMI (Jin et al., 2018a) and the flow-based system are evaluated on six different datasets. 
In this evaluation, predicted labels of induced constituents that are in gold trees are compared against gold labels of these constituents6 using V-Measure (Rosenberg and Hirschberg, 2007). Recall of the induced trees is used to weight these V-Measure scores. The final Recall-V-Measure (RVM) score is computed as the product of these two measures. RVM can be maximized when gold constituents are included in induced trees and their clustering is consistent with gold annotation. RVM is equal to unlabeled recall when the matching constituents have the same clustering of labels as the gold annotation. Results: Left- and right-branching baselines are constructed by assigning 21 random labels7 to constituents in purely left- and right-branching trees. However, both branching baselines perform poorly in this evaluation, due to the fact that there is no straightforward way to assign labels to constituent spans that may correspond to how gold labels are organized. VM scores for both baselines are close to 0, leading to RVM scores close to 0. Table 1 shows RVM scores for both the DIMI system and the flow-based system. For the labeled grammar induction systems, results show that the flow-based model outperforms DIMI on two of the three test datasets. Table 3 shows only the performance of the systems on bracketing. Although DIMI performs much better than the flow-based system in terms of bracketing F1 on WSJ20test, the flow-based system’s performance on average RVM is much closer to DIMI, which indicates that the flow-based system assigns more consistent labels to constituents than DIMI. On CTB20 and NEGRA20, where the bracketing performance of the flow-based system is better, this system out6The maximal projection category is used when a span is labeled with several categories in the gold annotation. All functional tags are removed. 7There are 21 phrase level tags in the Penn Treebank II tag set. performs DIMI by a large margin on RVM. Also, runs with the highest performance on bracketing are not the highest on RVM in general, showing that for labeled induction models, bracketing accuracy may be traded for labeling accuracy. Confusion matrix: Figure 1 shows the gold constituent recall on NEGRA20 for the two labeled grammar induction systems. We show 5 main phrasal categories in gold annotation and in a run of predicted trees. Grammars from DIMI are prone to category collapse in which only a few categories are active as non-terminals. Figure 1a shows that categories 8 and 3 are the main active categories containing the majority of all constituents, with category 8 covering 78% of all S categories, 23% of NPs, and many others. In Figure 1b, the clear diagonal pattern for the flowbased model shows that the gold categories do have separate corresponding predicted categories. For example, VP is almost exclusively in category 1 if appears in the predicted trees and PP is predominately in category 27. NP has a wider spread across predicted categories, but category 8 is mostly used to represent it. 5.2 Unlabeled parsing evaluation We additionally perform three unlabeled parsing evaluations against baseline systems. The first experiment uses a set of dependency-derived treebanks in morphologically rich languages to examine how morphology is used by the proposed system. The second experiment induces on datasets used in Jin et al. (2018a) and the final experiment uses the WSJ, CTB and NEGRA datasets without any punctuation for evaluation against published results by Htut et al. (2018). 
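As a concrete reference for the labeled evaluation above, the Recall-V-Measure computation can be sketched as follows; the span representation and matching convention are assumptions for illustration, and scikit-learn's v_measure_score is used for the clustering comparison.

```python
from sklearn.metrics import v_measure_score


def rvm(gold_constituents, predicted_constituents):
    """gold_constituents / predicted_constituents: dicts mapping a span key
    (sentence_id, start, end) to its gold or induced category label."""
    matched = [span for span in gold_constituents if span in predicted_constituents]
    recall = len(matched) / len(gold_constituents)          # unlabeled recall
    if not matched:
        return 0.0
    gold_labels = [gold_constituents[s] for s in matched]
    pred_labels = [predicted_constituents[s] for s in matched]
    # V-Measure over labels of constituents present in both tree sets,
    # weighted by unlabeled recall
    return recall * v_measure_score(gold_labels, pred_labels)
```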
Morphologically rich languages: Table 2 shows unlabeled parsing performance on the morphologically rich languages described at the beginning of this section, compared against branching baselines and DIMI. There is a substantial performance improvement observed across all languages when context embeddings are used as ob2447 8 3 5 7 1 Other NotInPred S NP VP PP AP Other 0.78 0.00 0.00 0.00 0.00 0.00 0.21 0.23 0.35 0.01 0.01 0.00 0.00 0.41 0.04 0.02 0.00 0.00 0.00 0.00 0.93 0.13 0.38 0.00 0.02 0.00 0.00 0.47 0.07 0.03 0.01 0.08 0.00 0.00 0.81 0.25 0.09 0.01 0.01 0.00 0.00 0.63 0.0 0.2 0.4 0.6 0.8 1.0 Labeled Constituent Recall (a) the DIMI system. 18 8 1 27 Other 21 NotInPred S NP VP PP AP Other 0.67 0.00 0.01 0.00 0.06 0.07 0.18 0.02 0.29 0.00 0.14 0.11 0.17 0.26 0.01 0.00 0.19 0.00 0.02 0.01 0.77 0.01 0.06 0.02 0.44 0.08 0.08 0.32 0.00 0.01 0.04 0.01 0.36 0.01 0.57 0.16 0.08 0.08 0.02 0.23 0.05 0.39 0.2 0.4 0.6 0.8 1.0 Labeled Constituent Recall (b) the flow-based system. Figure 1: The confusion matrices for DIMI and the flow-based system on the constituents in NEGRA20. The runs with best RVM scores are chosen for plotting. NotInPred means the proportion of gold constituents not in predicted trees. servations. Korean and Uyghur both have very sparse vocabulary, leading to poor performance of the DIMI system. Constituency treebanks: We also compare the flow-based system to published unlabeled parsing results from previous work. Table 3 shows the unlabeled parsing F1 scores for several grammar induction systems on the WSJ20test, CTB20 and NEGRA20 datasets reported in Jin et al. (2018a). Posterior inference on constituents (PIoC) proposed in Jin et al. (2018a) is also used with parse trees from 10 runs of the flow-based system. The flow-based system is able to produce more accurate trees on the CTB20 and NEGRA20 datasets despite not being depth-bounded. However, its performance is subpar on the WSJ20test dataset. Finally, the flow-based model is compared against other unsupervised parsing models on the Lang. LB RB DIMI this work µ (σ) µ (σ) Czech 24.8 50.3 49.3 (8.5) 52.9 (4.7) Finnish 30.5 52.1 49.0 (5.0) 52.5 (5.2) Korean 40.4 20.2 22.6 (2.1) 51.1 (2.6) Russian 45.5 28.7 50.2 (8.1) 58.0 (4.7) Uyghur 45.8 24.6 33.0 (3.2) 54.1 (1.4) Table 2: Unlabeled recall scores on a set of morphologically rich languages for the proposed system, DIMI and the left- and right-branching baselines. System WSJ20test CTB20 NEGRA20 CCL 60.9 37.1 33.7 UPPARSE 43.9 38.2 47.7 DB-PCFG 60.5 DIMI 63.1 38.9 40.8 this work 51.7 43.5 48.2 Table 3: Unlabeled parsing F1 scores for different grammar induction systems trained on only the 20 words or less subsets of the three constituency treebanks as in Jin et al. (2018a). three full constituency treebanks and their 10-orfewer-word subsets, trained with sentences without punctuation in training, following Htut et al. (2018). The results are shown in Table 4. First, the flow-based system performs better than reported results from all systems, using raw text only, on both NEGRA and CTB, showing that the system is able to accurately generate structure. Second, there is a smaller performance gap between the flow-based system and the best-performing one on WSJ than on WSJ10. 
The fact that the flow-based model underperforms on English may be due to the fact that 10 15 20 25 30 35 Embedding Distance 0 20 Recall Difference Finnish WSJ20test CTB20 Czech Korean Russian Uyghur NEGRA20 Figure 2: Correlation between recall difference of the flow-based system and DIMI and the average distance between ELMo embeddings. 2448 Model WSJ10 WSJ CTB10 CTB NEGRA10 NEGRA µ(σ) max µ(σ) max µ(σ) max µ(σ) max µ(σ) max µ(σ) max CCL 67.3(0.0) 67.3 44.9(0.0) 44.9 47.8(0.0) 47.8 21.1(0.0) 21.1 48.0(0.0) 48.0 27.6(0.0) 27.6 UPPARSE 44.8(0.0) 44.8 23.6(0.0) 23.6 44.7(0.0) 44.7 24.2(0.0) 24.2 53.4(0.0) 53.4 33.4(0.0) 33.4 PRPN-UP 62.2(3.9) 70.3 26.0(2.3) 32.8 PRPN-LM 70.5(0.4) 71.3 37.4(0.3) 38.1 DIMI 49.0(4.8) 55.8 41.1(2.9) 45.9 47.5 (2.7) 54.1 this work 56.0(6.1) 63.6 38.5(3.9) 42.7 49.4(1.3) 50.7 29.2(2.1) 31.9 51.8 (3.1) 58.5 37.1(2.5) 41.2 RB 61.7(0.0) 61.7 39.5(0.0) 39.5 50.4(0.0) 50.4 21.8(0.0) 21.8 43.3(0.0) 43.3 22.8(0.0) 22.8 LB 28.7(0.0) 28.7 11.6(0.0) 11.6 35.8(0.0) 35.8 11.7(0.0) 11.7 35.1(0.0) 35.1 16.9(0.0) 16.9 DMV+CCM 77.6(0.0) 77.6 63.9(0.0) 63.9 UML-DOP 82.9(0.0) 82.9 67.0(0.0) 67.0 Table 4: Unlabeled parsing F1 scores for different constituency grammar induction systems trained on the full set of the treebanks where punctuation is removed from all data in training and evaluation with results reported in Htut et al. (2018). PRPN models train and test on different subsets of the corpora, whereas other models use the full corpora to train and evaluate. All models except DIMI and this work produce unlabeled trees. DMV+CCM and UML-DOP use gold POS tags as observations for induction, listing here for reference. the English vocabulary contains a relatively large number of high frequency words, which makes contexts for words similar, showing up as similarities between the context embeddings for different words. This confuses the model because it relies on the observed embeddings being distinct and representative for induction. Figure 2 shows average Euclidean distances for 50,000 pairs of ELMo embeddings of different words randomly sampled from each dataset. The averaged distance between the embeddings is positively correlated with the gain of the flow-based system over DIMI, indicating the importance of varied contexts for grammar induction. 5.3 Induced interpretable categories PCFG induction systems usually create syntactic categories that correspond to coarse-grained linguistic classes like nouns and verbs using cooccurrence statistics. However the flow-based system also creates classes that are morphological or semantic in nature. The ability of the system to use morphological and semantic information to help grammar induction is shown in Table 5. Grammars induced on Korean from the flowbased system are greatly improved over baselines which use words only as input. Korean is an agglutinative language with many morphemes per token, so approaches that treat tokens as words must address severe sparsity issues. As ELMo embeddings include subword information from Korean characters, they may contain information useful for understanding morphology – the nominative clitics 이or 가and the accusative clitics 을or 를, Cat. Interp. 
Most common words Korean 3 ADJ 큰(big), 많은(many) 새로운(new), 중요한(important) 11 N-NOM 사람이(person), 문제가(problem) 사람들이(people), 일이(work) 12 N-ACC 사실을(fact), 영향을(influence) 일을(work), 의미를(meaning) German 7 DAT den, dem, einem, diesem, ihren 8 GEN der, des, einer, dieser, seiner, eines 20 NOM/ACC die, das, der, ein, eine, ihre, keine Chinese 1 V-TRANS 提供(provide), 进行(carry out) 举行(hold), 利用(utilize) 14 V-MODAL 要(would like), 会(will) 能(can), 可以(be able) 28 V-SCOMP 说(say), 希望(hope) 认为(think), 指出(point out) Table 5: Analysis of predicted syntactic categories (Cat.) and their interpreted syntactic categories (Interp.) in runs with highest RVM scores for Korean, German and Chinese. The most common words in each predicted category are listed. for example, may encode strong biases towards a word token being a noun along with its case. Categories like 11 and 12 in Table 5 reliably capture nouns in the nominative and accusative cases, respectively, even though in both cases the marking clitic differs depending on whether the noun preceding it ends in a vowel or consonant. Similarly, category 3 shows noun-preceding adjectives, which in Korean are formed by verb stems plus ㄴor 은, and the inducer is again able to cluster words with both endings together. 2449 Model setup RVM µ (σ) max Multi 18.9 (1.6) 21.0 Gauss +Fasttext 17.5 (1.5) 19.4 +ELMo 23.4 (2.0) 26.7 NICE +ELMo 13.9 (4.6) 22.3 +ELMo+sim 25.7 (2.2) 28.7 +ELMo+sim+µDist 26.2 (2.8) 30.3 RNVP +ELMo+sim+µDist 24.1 (3.2) 27.9 Table 6: Parsing performance on the NEGRA20 dataset with different configurations of the model. NICE and RNVP are the NICE and RealNVP models used for modeling emission. Sim and µDist are the similarity penalty and category distance regularizers respectively. For German, the cased articles also have similar endings. The dative articles usually end with -en or -em, and the genitive articles usually end with -er or -es. Having access to the subword information, the flow-based system is able to come up with these distinctions with no supervision, because the cases may provide important clues to relative positions of the following nouns to verbs or prepositions. Contextual information also helps greatly, seen here when the system distinguishes the genitive der in category 8 and the nominative or accusative der in category 20 in the phrases like der(20) P¨achter der(8) Junkerstube (the lessee of the junkerstube). Finally, for languages like Chinese where there are few morphological markings, semantic information may help the system induce syntactic categories. Category 28 is a category of verbs related to cognition and expression, which also characteristically accepts sentential complements (Vendler, 1972; Fisher et al., 1991). Syntactic categories like these are not seen in systems inducing with words only. This indicates that the semantics of these verbs may play a role here, especially since Chinese has no complementizer to signal an upcoming sentential complement. 5.4 Ablation experiments Table 6 shows the ablation and comparison experiments on NEGRA20. ELMo embeddings provide a large performance boost with the Gaussian emission model over both the multinomial emission model, which has no access to contextual and subword information, and the Gaussian emission model with Fasttext embeddings based on character n-grams (Joulin et al., 2016), showing that both context and subword information helps grammar induction. The two linguistically-motivated regularization terms help the flow-based model perform even better. 
Most notably, the similarity performance helps the flow models greatly by restricting the freedom that the flow models have to change the context embeddings, indicating that the information in context embeddings is valuable for induction. The Real NVP model produces higher data likelihood but its performance is lower than other NICE-based models, indicating that the volume-preserving property of NICE is important for preventing overfitting. 6 Related work Earlier work on PCFG induction (Carroll and Charniak, 1992; Johnson et al., 2007; Liang et al., 2009; Tu, 2012) shows that directly inducing PCFGs from raw text is difficult. Recent work (Shain et al., 2016; Jin et al., 2018b,a) shows that inducing PCFGs from raw text is possible, and cognitive constraints are useful for helping the induction model to find good grammars. Closely related to PCFG induction is the task of unsupervised constituency parsing from raw text where trees are unlabeled. Earlier work by Seginer (2007) and Ponvert et al. (2011) induces unlabeled trees and achieves good results. More recent work (Shen et al., 2018) utilizes complex neural architectures for unsupervised parsing and language modeling and also shows good results on English. Although unlabeled parsing evaluation is common, other work (Bisk and Hockenmaier, 2015) has argued for labeled parsing evaluation for grammar induction. Early unsupervised dependency grammars and part-of-speech induction models (Klein and Manning, 2004; Christodoulopoulos and Steedman, 2010) have been similarly augmented with neural networks and word embeddings (Tran et al., 2016; Jiang et al., 2016). Neural networks provide flexible ways to parameterize distributions, and word embeddings (Mikolov et al., 2013; Pennington et al., 2014) allow these models to use semantic information in these distributed representations. Results show that these improvements produce more accurate dependencies and POS assignments, but these improvements have not been applied to PCFG induction. Normalizing flows have been shown to be powerful models for complex densities (Dinh et al., 2015, 2017; Rezende and Mohamed, 2015; Papa2450 makarios et al., 2017). He et al. (2018) showed improved performance on POS induction and dependency induction by incorporating normalizing flows into baseline models (Klein and Manning, 2004; Lin et al., 2015). 7 Conclusion This work proposes a neural PCFG inducer which employs context embeddings (Peters et al., 2018) in a normalizing flow model (Dinh et al., 2015) to extend PCFG induction to use semantic and morphological information. Linguistically motivated similarity penalty and categorical distance constraints are also imposed on the inducer as regularization. Labeled and unlabeled evaluation shows that the PCFG induction model with normalizing flow and context embeddings produces grammars with state-of-the-art accuracy on a variety of different languages. Results show consistent and meaningful use of labels at phrasal and lexical levels by the flow-based model. Ablation further shows a positive effect of normalizing flow, context embeddings and proposed regularizers. Acknowledgments The authors would like to thank the anonymous reviewers for their helpful comments. Computations for this project were partly run on the Ohio Supercomputer Center (1987). This research was funded by the Defense Advanced Research Projects Agency award HR0011-15-2-0022. 
The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. This work was also supported by the National Science Foundation grant 1816891. All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation. References Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. In Cognition, volume 164, pages 116–143. Elsevier B.V. Colin Bannard, Elena Lieven, and Michael Tomasello. 2009. Modeling children’s early grammatical knowledge. Proceedings of the National Academy of Sciences of the United States of America, 106(41):17284–9. Yonatan Bisk and Julia Hockenmaier. 2015. Probing the linguistic strengths and limitations of unsupervised grammar induction. ACL-IJCNLP 2015 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Proceedings of the Conference, 1:1395–1404. Rens Bod. 2006. Unsupervised parsing with U-DOP. In Proceedings of the Conference on Computational Natural Language Learning, pages 85–92. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Working Notes of the Workshop on Statistically-Based NLP Techniques, (March):1– 13. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards Better UD Parsing: Deep Contextualized Word Embeddings, Ensemble, and Treebank Concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55–64, Brussels, Belgium. Association for Computational Linguistics. Christos Christodoulopoulos and Mark Steedman. 2010. Two Decades of Unsupervised POS induction: How far have we come? 2010 Conference on Empirical Methods in Natural Language Processing, (October):575–584. Michael Collins, Lance Ramshaw, Jan Hajiˇc, and Christoph Tillmann. 1999. A Statistical Parser for Czech. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 505–512. Laurent Dinh, David Krueger, and Yoshua Bengio. 2015. NICE: Non-Linear Independent Components Estimation. In ICLR Workshop. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. 2017. Density estimation using Real NVP. ICLR. Cynthia Fisher, Henry Gleitman, and Lila R Gleitman. 1991. On the semantic content of subcategorization frames. Cognitive Psychology, 23(3):331–392. Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2018. Unsupervised Learning of Syntactic Structure with Invertible Neural Projections. In EMNLP, pages 1292–1302. Association for Computational Linguistics. Phu Mon Htut, Kyunghyun Cho, and Samuel R Bowman. 2018. Grammar Induction with Neural Language Models: An Unusual Replication. In EMNLP, pages 4998–5003. Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 61503248, pages 763–771. 2451 Lifeng Jin, Finale Doshi-Velez, Timothy A Miller, William Schuler, and Lane Schwartz. 2018a. Depthbounding is effective: Improvements and evaluation of unsupervised PCFG induction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Lifeng Jin, Finale Doshi-Velez, Timothy A Miller, William Schuler, and Lane Schwartz. 2018b. 
Unsupervised Grammar Induction with Depth-bounded PCFG. Transactions of the Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian Inference for PCFGs via Markov chain Monte Carlo. Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139–146. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of Tricks for Efficient Text Classification. Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Diederik P Kingma and Prafulla Dhariwal. 2018. Glow: Generative Flow with Invertible 1x1 Convolutions. NIPS. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 128–135. Dan Klein and Christopher D. Manning. 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proceedings of the Annual Meeting on Association for Computational Linguistics, volume 1, pages 478–485. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - ACL-IJCNLP ’09, volume 1, page 91. Chu-Cheng Lin, Waleed Ammar, Chris Dyer, and Lori Levin. 2015. Unsupervised POS Induction with Word Embeddings. In Proceedings of Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1311–1316. Brian MacWhinney and Elizabeth Bates. 1993. The Crosslinguistic Study of Sentence Processing. Cambridge University Press, New York. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. CoRR, abs/1301.3:1–12. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D Manning, Ryan Mcdonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of Language Resources and Evaluation Conference. The Ohio Supercomputer Center. 1987. Ohio Supercomputer Center. \url{http://osc.edu/ark:/19495/f5s1ph73}. George Papamakarios, Theo Pavlakou, and Iain Murray. 2017. Masked Autoregressive Flow for Density Estimation. In Advances in Neural Information Processing Systems, pages 2338–2347. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Kim Plunkett and Clair Wood. 2004. The development of children’s understanding of grammar. Cognitive and language development in children. Oxford: Blackwell, pages 163–204. Elias Ponvert, Jason Baldridge, and Katrin Erk. 2011. 
Simple unsupervised grammar induction from raw text with cascaded finite state models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1077–1086. Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL). Yoav Seginer. 2007. Fast Unsupervised Incremental Parsing. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 384–391. Cory Shain, William Bryce, Lifeng Jin, Victoria Krakovna, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2016. Memory-bounded left-corner unsupervised grammar induction on child-directed input. In Proceedings of the International Conference on Computational Linguistics, pages 964–975. 2452 Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural Language Modeling by Jointly Learning Syntax and Lexicon. In ICLR. Wojciech Skut, Thorsten Brants, Brigitte Krenn, and Hans Uszkoreit. 1998. A Linguistically Interpreted Corpus of German Newspaper Text. In Proceedings of the ESSLLI Workshop on Recent Advances in Corpus Annotation., page 7. Ke Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised Neural Hidden Markov Models. In Proceedings of the Workshop on Structured Prediction for NLP. Kewei Tu. 2012. Unsupervised learning of probabilistic grammars. Ph.D. thesis. Zeno Vendler. 1972. Res cogitans: An essay in rational psychology. Fei Xia, Martha Palmer, Nianwen Xue, Mary Ellen Ocurowski, John Kovarik, Fu-Dong Chiou, Shizhe Huang, Tony Kroch, and Mitch Marcus. 2000. Developing Guidelines and Ensuring Consistency for Chinese Text Annotation. In Proceedings of the Second Language Resources and Evaluation Conference.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2453–2463 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2453 Variance of average surprisal: a better predictor for quality of grammar from unsupervised PCFG induction Lifeng Jin and William Schuler Department of Linguistics The Ohio State University, Columbus, OH, USA {jin, schuler}@ling.osu.edu Abstract In unsupervised grammar induction, data likelihood is known to be only weakly correlated with parsing accuracy, especially at convergence after multiple runs. In order to find a better indicator for quality of induced grammars, this paper correlates several linguistically- and psycholinguisticallymotivated predictors to parsing accuracy on a large multilingual grammar induction evaluation data set. Results show that variance of average surprisal (VAS) better correlates with parsing accuracy than data likelihood, and that using VAS instead of data likelihood for model selection provides a significant accuracy boost. Further evidence shows VAS to be a better candidate than data likelihood for predicting word order typology classification. Analyses show that VAS seems to separate content words from function words in natural language grammars, and to better arrange words with different frequencies into separate classes that are more consistent with linguistic theory. 1 Introduction Unsupervised grammar induction models learn to produce hierarchical structures for strings of words. Previous work (Seginer, 2007; Ponvert et al., 2011; Shain et al., 2016; Jin et al., 2018b) show that using data likelihood as both the objective for optimization and the criterion for model selection, either implicitly (in the case of Bayesian models) or explicitly (in the case of EM), gives good results on grammar induction. However, it is also known that data likelihood is only weakly correlated with parsing accuracy, especially at convergence (Smith, 2006; Johnson et al., 2007; Jin et al., 2018a). This weak correlation points to the fact that the maximization of data likelihood at convergence may be non-optimal for model selection, and this non-optimality indicates other constraints on learning may be at work in human acquisition. In this work, several linguistically- and psycholinguistically-motivated constraints related to syntax are explored as predictors of parsing accuracy for grammars learned by unsupervised induction (Jin et al., 2018a). Results show that variance of average surprisal (VAS) is better correlated with parsing accuracy of induced grammars than data likelihood. Using VAS for model selection at convergence also produces significantly higher parsing accuracy. Further evidence shows VAS to be a better candidate than data likelihood for predicting word order typology classification. Analyses show that VAS seems to separate content words from function words in natural language grammars, and seems to better arrange words with different frequencies into separate classes that are more consistent with linguistic theory. 2 Related work Induction of PCFGs has previously been considered a difficult problem (Carroll and Charniak, 1992; Johnson et al., 2007; Liang et al., 2009; Tu, 2012). Earlier work attributed the lack of success for induction to a lack of correlation between parsing accuracy and data likelihood (Johnson et al., 2007), or to the likelihood function or the posterior being filled with weak local optima (Liang et al., 2009; Gimpel and Smith, 2012). 
Later work has shown that it is possible to induce PCFGs with useful labels from words alone (Shain et al., 2016; Jin et al., 2018b,a). Induction models of constituency grammars or trees usually use data likelihood as both the objective and the model selection criterion (Seginer, 2007; Johnson et al., 2007; Ponvert et al., 2011; Shen et al., 2018), but the weak correlation between data likelihood and parsing accuracy hints at the non-optimality of this practice (Smith, 2006; Headden et al., 2009; Jin 2454 et al., 2018a). On the other hand, many linguistic and psycholinguistic theories propose constraints either as properties of natural language grammar or as constraints on human processing and acquisition. Chomsky (1965) proposes that grammars should favor fewer rules, which may be trimmed by the generalizability of the rules (Yang, 2017). Dryer (1992) argues that grammars with certain constituent ordering should produce trees with consistent branching tendencies, which is in contrast to theories that attribute constituent ordering to processing (Hawkins, 1994; Gibson, 1998). Rajkumar et al. (2016) and Jin et al. (2018b) show that grammars should generally control the maximal allowed stack depth. Yang (2013) observes that rules in a natural language grammar follow Zipf’s law, just like words. Grammars may also contribute to the observation that the likelihood of each sentence tends to decrease as a monologue goes on (Keller, 2004; Levy and Jaeger, 2007). 3 Predictors Motivated by these constraints, six accuracy predictors — data likelihood, right-branching score, rule complexity, average stack depth, Zipf likelihood ratio and variance of average surprisal — are evaluated as predictors of parsing accuracy over grammars from multiple runs of a PCFG inducer (Jin et al., 2018a). Variance of average surprisal, Zipf likelihood ratio and data likelihood are defined on the PCFG itself, and the other three are defined on Viterbi parses produced by the PCFG on the corpus. Data likelihood One of the most common induction and model selection criteria is data likelihood. Data likelihood (LL) refers to the marginal likelihood of a corpus given a PCFG, marginalizing out all trees: LL = P(σ; G) = X τ∈T P(σ, τ; G), (1) where σ is a corpus and T is all possible parse trees generated by a grammar G for σ. As it is usually the optimization objective, likelihood should be positively correlated with parsing accuracy at convergence. Right-branching score Branching Direction Theory (Dryer, 1992) explains different patterns of word order among languages. It distinguishes ‘verb patterners,’ which are non-phrasal lexical categories, from ‘object patterners,’ which are phrasal categories. It predicts that VO languages tend towards rightbranching structures and OV languages tend towards left-branching structures. Let |cright →a b| be the number of right children of a parent expanding into two non-terminal categories in all parse trees, and |c∗→a b| be the total number of nodes that expand into two non-terminal categories, then RBS = |cright →a b| |c∗→a b| (2) is the right branching score of the parse trees. A purely right-branching set of binary-branching trees yields an RBS of 1.0, and a purely leftbranching set of binary-branching trees yields an RBS of 0.0. Previous work shows that rightbranching baselines are accurate for a few languages (Seginer, 2007). BDT predicts that different word orders favor different branching directions. 
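As a concrete illustration of Eq. 2, the right-branching score can be computed in a single pass over the Viterbi parses. The sketch below assumes binary trees encoded as nested lists [label, left, right] for branching nodes, with lexical expansions as shorter entries; the encoding and the function name are ours, for illustration only.

def right_branching_score(trees):
    # Eq. 2: fraction of nodes expanding into two non-terminal categories
    # that occur as the right child of their parent.
    right, total = 0, 0

    def branching(node):
        return isinstance(node, list) and len(node) == 3

    def walk(node, is_right_child):
        nonlocal right, total
        if not branching(node):
            return
        total += 1
        if is_right_child:
            right += 1
        walk(node[1], False)
        walk(node[2], True)

    for tree in trees:
        walk(tree, False)      # the root only contributes to the denominator
    return right / total if total else 0.0

On strongly right-branching parses this value approaches 1.0 and on strongly left-branching parses it approaches 0.0, matching the description above.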
Rule complexity One of the evaluation metrics used in the generative linguistics tradition is the complexity of a grammar (Chomsky, 1965). Often the number of rules is used as a proxy measurement of how complex a proposed grammatical analysis is against some other reference grammatical analysis. According to this theory, fewer unique rules present in the Viterbi parses would indicate higher grammar quality. Average stack depth Embedding depth is a known limiting factor to human sentence processing (Chomsky and Miller, 1963; Wu, 2010; Rajkumar et al., 2016), and is shown to benefit unsupervised grammar induction (Noji and Johnson, 2016; Jin et al., 2018b). It is also evaluated in this work as a predictor of parsing accuracy, defined as the expected number of stack elements per sentence in a left-corner parser for the Viterbi parses. Theories such as that of Chomsky and Miller (1963) predict it to correlate negatively with parsing accuracy. Zipf likelihood ratio The distribution of words in a corpus is known to follow Zipf’s law (Zipf, 1935), in which the frequency of a word is inversely proportional to its frequency rank. Counts of syntactic rules in annotated corpora also follow this law (Yang, 2013). 2455 Motivated by this observation, experiments in this work also evaluate expected counts of all possible rules, and compute the ratio (Zipf R) between the likelihood that the rules are generated by a power law model and the likelihood that they are generated by a lognormal model of which the mean µ must be positive (Clauset et al., 2009). The higher the ratio, the better fit the power law model provides to the rule counts. Zipfian observations predict this ratio should be positively correlated with parsing accuracy. Variance of average surprisal Finally, languages may have other interesting properties that are not identified by maximizing the likelihood of the corpus. For example, languages often distinguish function words from content words and assign them distinct categories. If grammars assign very small sets of high frequency words to a few function-word-like categories, this will increase the difference in likelihood between sentences consisting of mostly these function words and sentences with more modifiers and other content words. The magnitude of this difference can be measured using variance of average sentential surprisal (VAS): VAS = 1 N N X i=1  log P(σi) |σi| −1 N N X j=1 log P(σ j) |σ j|  2 (3) where N is the number of sentences in the corpus, and σi is the i-th sentence. Because sentences in larger corpora contain different numbers of function words, VAS is predicted to be high when the distinction between predicted function words and predicted content words in the induced grammar aligns with human judgments, indicating that VAS should be positively correlated with parsing accuracy. 4 Dataset The grammar accuracy predictors described above are evaluated on multiple languages using corpora annotated with constituents (Xia et al., 2000; Marcus et al., 1993; Alastair et al., 2018) and corpora annotated with dependencies (Nivre et al., 2016) which are converted to constituents (Collins et al., 1999). An example is shown in Figure 1. These evaluations use corpora with at least 2,000 annotated sentences, excluding all sentences with nonprojective dependency graphs. Each induction run uses approximately 15,000 sentences randomly sampled from each language corpus. 
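Returning briefly to Eq. 3: given per-sentence log-probabilities under an induced PCFG (e.g. from the inside algorithm) and the corresponding sentence lengths, VAS reduces to a population variance. The sketch below is only for illustration; the argument names are ours, and the log-probabilities are assumed to be computed elsewhere.

def variance_of_average_surprisal(sent_logprobs, sent_lengths):
    # Eq. 3: variance, across sentences, of the per-word average
    # log-probability.  sent_logprobs[i] = log P(sigma_i),
    # sent_lengths[i] = |sigma_i|.
    per_word = [lp / n for lp, n in zip(sent_logprobs, sent_lengths)]
    mean = sum(per_word) / len(per_word)
    return sum((a - mean) ** 2 for a in per_word) / len(per_word)

Model selection with VAS then simply keeps, out of the multiple runs for a language, the grammar whose training corpus yields the largest value.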
Languages with fewer than 15,000 annotated sentences are augmented with sentences sampled from Wikipedia (Zeman et al., 2017). Evaluations initially screen predictors on a development partition consisting of 12 languages from 12 language subgroups covering language families including Indo-European, Uralic, Korean, Turkic, Sino-Tibetan and Afro-Asiatic. Significance tests use a separate test partition consisting of 25 languages1 which are different from the development partition, covering additional Japanese, Austronesian and Austro-Asiatic language families. 5 Model These evaluations use the Bayesian PCFG induction model from Jin et al. (2018a),2 the objective function of which can be considered to be data likelihood.3 However, the results for model selection reported in this paper are endemic neither to PCFG induction nor to the objective function used in induction. These experiments can be done with PCFGs randomly sampled from any distribution, but the fact that maximizing data likelihood as the objective can give better models than arbitrary random models ensures that evaluations are tractable and meaningful. This model defines a Chomsky normal form (CNF) PCFG as a matrix G of binary rule probabilities which is first drawn from the Dirichlet prior with a concentration parameter β: G ∼Dirichlet(β) (4) Trees for sentences 1..N are then generated by drawing from a PCFG: τ1..N ∼PCFG(G) (5) Specifically, each tree τ is a set {τϵ, τ1, τ2, τ11, τ12, τ21, ...} of category node labels τη where η ∈{1, 2}∗defines a path of left or right branches from the root to that node. Category labels for every pair of left and right children τη1, τη2 are drawn from a multinomial 1Portuguese in the test partition refers to Brazilian Portuguese. Nynorsk and Bokmål are two varieties of Norwegian. 2https://github.com/lifengjin/dimi_emnlp18. 3Bayesian models usually have no objective function, but in inference the parameters will drift towards one of the modes, which may appear to be optimized for data likelihood. 2456 In Danish , the word may even apply to shallow lagoons . ADP PROPN PUNCT DET NOUN AUX ADV VERB ADP ADJ NOUN PUNCT case obl punct det nsubj aux advmod root case amod obl punct (a) The dependency graph for the example sentence from the English Universal Dependency Treebank. X PUNCT . X NOUN lagoons ADJ shallow ADP to VERB apply ADV even AUX may X NOUN word DET the PUNCT , X PROPN Danish ADP In (b) The constituency tree converted from the dependency graph. Only the constituents where there is a single incoming dependency relation are kept. The three created constituents correspond to two PPs and one NP. They are labeled with X. Figure 1: Examples of a dependency graph and the converted constituent tree for the sentence In Danish, the word may even apply to shallow lagoons. distribution defined by the grammar G and the category of the parent τη: τη1, τη2 ∼Multinomial(δτη ⊤G) (6) where δx is a Kronecker delta function equal to 1 at value x and 0 elsewhere, and terminals have null expansions PG(a b | w) = PG(a b | ⊥) = ⟦a, b=⊥, ⊥⟧for w ∈W.4 In inference, the conditional posteriors are calculated with a chart sampler (Johnson et al., 2007), and Gibbs sampling is used to draw samples of grammars and parse trees from the true posteriors. For example, at iteration t of Gibbs sampling: Gt ∼P(Gt | τt−1 1..N, στt−1 1..N, β) (7) τt 1...N ∼P(τt 1..N | Gt, στt 1..N) (8) where στ denotes the terminals in τ. 
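To make the generative story in Eqs. 4–6 concrete, the following sketch samples a toy CNF grammar from a symmetric Dirichlet prior and expands a tree top-down. It is a deliberate simplification for illustration (terminal and non-terminal expansions are folded into one multinomial per category, and there is no chart sampler or Gibbs update), not the authors' implementation.

import numpy as np

def sample_grammar(n_cats, vocab, beta, rng):
    # One Dirichlet(beta) draw per parent category over all possible
    # right-hand sides: (left_cat, right_cat) pairs or single terminals.
    rhs = [(a, b) for a in range(n_cats) for b in range(n_cats)] + list(vocab)
    probs = rng.dirichlet([beta] * len(rhs), size=n_cats)
    return rhs, probs

def sample_tree(cat, rhs, probs, rng, depth=0, max_depth=15):
    # Expand a category by drawing one right-hand side from its multinomial.
    if depth >= max_depth:                      # crude guard against runaway recursion
        terminals = [r for r in rhs if isinstance(r, str)]
        return (cat, rng.choice(terminals))
    choice = rhs[rng.choice(len(rhs), p=probs[cat])]
    if isinstance(choice, str):                 # terminal expansion
        return (cat, choice)
    left, right = choice
    return (cat,
            sample_tree(left, rhs, probs, rng, depth + 1, max_depth),
            sample_tree(right, rhs, probs, rng, depth + 1, max_depth))

rng = np.random.default_rng(0)
rhs, probs = sample_grammar(n_cats=5, vocab=["the", "dog", "barks"], beta=0.2, rng=rng)
print(sample_tree(0, rhs, probs, rng))

In the actual model, inference alternates the conditional draws in Eqs. 7–8; sampling from the prior as above only illustrates the generative direction.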
The inference procedure naturally produces sampled parses of a sentence, and the Viterbi parse of a sentence given an induced PCFG can be obtained by running the Viterbi algorithm with the grammar on the sentence. 6 Experiments An exploratory evaluation on the 12-language development partition described in Section 4 measures the effectiveness of the proposed predictors 4Here, ⟦· · ·⟧is an indicator function. in order to narrow the number of possible candidates prior to significance testing. A confirmatory evaluation on the 25-language test partition with significance testing is performed with the predictors that are found to be effective in the exploratory evaluation. Following Jin et al. (2018a), the concentration parameter of the Dirichlet priors is set to 0.2 for all languages. The number of syntactic categories C is set to 30 to allow the model to explore more complex syntactic structures. 30 random seeds are used for initialization of the model parameters, creating 30 runs for each language. The embedding depth of the induced grammars is not bounded in any run. All runs are stopped at iteration 700 which has been observed to have stable likelihood for at least 200 iterations (Jin et al., 2018a). A sampled grammar and Viterbi parse from the end of each run are used for predictor value calculation. Recall is used as the parsing accuracy metric for recovery of attested constituents. 7 Results 7.1 Development results Correlation study Columns two through seven in Table 1 show the correlation coefficients (Pearson’s ρ) between all the proposed predictors and the recall of the Viterbi parses of the development partition. Coef2457 0 20 Top 1 diff Korean Chinese English Turkish Bulgarian Spanish Catalan Arabic Finnish Hindi Czech Nynorsk 0 10 20 Top 5 diff Figure 2: Recall difference between the run with the highest VAS and the highest likelihood as well as the difference between the average recall of the runs with the top 5 highest VAS and the top 5 highest likelihood on the development partition. Blue indicates that recall of the highest VAS runs is higher, and red indicates it is lower than the highest likelihood runs. ficients higher than 0.45 or lower than –0.45 are considered substantially predictive and reported in the table. Coefficients are averaged across reported languages. Variance of average surprisal (VAS) has the highest correlation coefficients among all the predictors with the highest average correlation coefficient of 0.627. Data likelihood (LL), which is the most common metric for optimization and model selection in grammar induction, is the second best predictor. It also has a high average correlation coefficient of 0.588.5 Right-branching score also is substantially predictive of recall, but two of the languages have a negative coefficient, making it difficult to use as a model selection criterion without prior knowledge about the branching tendency of a language. Rule complexity, average stack depth as well as Zipf likelihood ratio all show up as predictive, but the signs of the coefficients are similarly inconsistent. Also, the signs of rule complexity are mostly positive, indicating that grammars should maintain a certain minimum level of complexity. Parsing accuracy and model selection The rightmost columns in Table 1 show parsing results on the development partition. 
The oracle recall is the highest recall obtained with 30 runs and the baseline reports whichever one of the leftbranching baseline or the right-branching baseline 5Correlation coefficients using Kendall’s τ are similar: on the development partition, the average τ is 0.27 for likelihood and 0.33 for VAS. On the test partition the average τ is 0.07 for likelihood and 0.24 for VAS. 25 0 25 Top 1 diff Slovenian Ukrainian Latvian Hebrew Uyghur Polish Russian Estonian Greek Persian Romanian Slovak Urdu Indonesian Swedish Croatian Bokmaal Danish Basque Portuguese French Vietnamese Dutch Japanese Italian 0 10 Top 5 diff Figure 3: Recall difference between the run with the highest VAS and the highest likelihood as well as the difference between the average recall of the runs with the top 5 highest VAS and the top 5 highest likelihood on the test languages. Blue indicates that recall of the highest VAS runs is higher, and red indicates it is lower, than the highest likelihood runs. has the highest recall, marked by L or R. The VAS and LL columns in Table 1 show the parsing accuracy of the runs chosen by VAS and likelihood and Figure 2 shows the difference in recall. Positive difference shows that the run chosen with VAS is more accurate, and negative difference shows that LL is more accurate. Using VAS as the model selection criterion provides on average 3.19 points of recall gain. Recall gain from Nynorsk seems to be a fairly large outlier, but the positive gains from other languages are also larger than the negative gains. Figure 2 also shows the difference of average recall between the runs with the top 5 highest VAS and likelihood. There are still larger positive differences than negative differences, suggesting that VAS more strongly correlates with recall. 7.2 Test results Parsing accuracy and model selection In order to reduce the need for multiple trials correction, evaluations on the test partition only examine surprisal variance and data likelihood. The VAS and LL columns in Table 2 show the parsing accuracy of the runs chosen by VAS and likelihood on the test partition, and Figure 3 shows 2458 Language Correlation coefficients Recall Zipf R Stack depth RBS Rule comp LL VAS Baseline LL VAS Oracle Arabic 0.604 0.499 0.559 43.94 R 50.84 51.39 57.35 Bulgarian –0.807 0.722 55.28 R 70.65 70.46 70.65 Catalan –0.772 0.603 0.608 0.770 41.13 R 63.09 63.20 63.48 Chinese 0.532 29.19 R 42.39 39.88 42.39 Czech –0.517 0.605 0.503 50.26 R 55.63 62.88 62.88 English –0.540 0.554 0.549 0.689 0.673 44.74 R 62.50 61.11 65.57 Finnish 0.491 –0.700 0.854 52.13 R 46.27 51.16 54.16 Hindi 0.539 30.12 L 38.23 45.10 54.27 Korean –0.545 0.868 –0.783 0.915 40.38 R 24.74 21.15 29.78 Nynorsk 0.576 0.677 55.40 R 41.46 68.10 68.20 Spanish 0.583 46.35 R 53.83 53.83 65.94 Turkish –0.593 0.785 –0.954 0.512 45.54 L 33.94 33.61 47.02 Average –0.445 0.103 0.207 0.365 0.588 0.627 44.54 48.63 51.82 56.81 Table 1: Correlation coefficients (Pearson’s ρ) between recall at convergence and the proposed predictors on the languages in the development partition as well as recall from baselines and runs chosen with various model selection methods. Coefficients that are higher than 0.45 or lower than –0.45 are reported in table. Coefficients are averaged across reported languages. For recall, baseline shows recall from whichever one in left-branching baseline and right-branching baseline produces a higher recall. The direction of branching is marked by L or R. 
Oracle recall is from the oracle best run, and LL and VAS show recall from the run with the highest LL and highest VAS. The best run among the baseline, LL and VAS is boldfaced. Language Baseline LL VAS Oracle Basque 42.21 L 41.02 53.31 59.92 Bokmål 57.75 R 58.94 69.28 70.52 Croatian 47.43 R 50.97 60.04 60.04 Danish 55.30 R 58.91 69.84 69.84 Dutch 49.35 R 46.55 68.73 68.73 Estonian 48.08 R 56.91 56.71 56.91 French 42.22 R 47.25 60.75 63.09 Greek 49.62 R 60.87 56.41 64.66 Hebrew 43.52 R 60.88 60.88 65.20 Indonesian 50.37 R 50.90 57.27 57.27 Italian 52.98 R 38.39 68.91 70.61 Japanese 40.13 L 21.01 44.04 46.80 Latvian 51.67 R 58.86 47.67 58.86 Persian 24.40 R 38.50 38.50 42.22 Polish 70.33 R 76.76 73.89 78.27 Portuguese 45.32 R 51.41 64.00 65.31 Romanian 47.61 R 61.48 61.48 61.48 Russian 50.45 R 61.78 59.62 61.78 Slovak 64.83 R 72.49 72.49 72.78 Slovenian 54.54 R 67.23 36.02 69.35 Swedish 53.77 R 60.25 68.92 68.92 Ukrainian 51.88 R 60.32 45.19 60.32 Urdu 29.62 L 31.33 34.11 42.65 Uyghur 45.77 L 35.55 29.41 48.88 Vietnamese 55.41 R 43.55 59.74 59.74 Average 48.98 52.66 56.69 61.77 Table 2: Parsing accuracy for languages in the test partition. See the caption of Table 1 for the description of the columns. the difference in recall for top 1 and top 5 runs. The patterns are similar to the ones on the development set. Using VAS as the model selection criterion with the top 1 runs provides on average 4.03 points of recall gain. Table 3 shows correlation coefficients for LL and VAS on languages in the test partition. Again the observed pattern is similar, if not more extreme, to what is seen on the development partition. The magnitude of the coefficients is consistent with findings in the development partition. Except for Basque, the sign for VAS-recall correlation is consistently positive, confirming that it is reliable to use VAS for model selection. Confirmatory significance testing is performed on two sets of 25,000 randomly sampled parses from the runs with highest likelihood and highest VAS on all test languages. The parses are randomly permuted between the two sets, and the difference in recall between the two sets is measured. This permutation test shows that the average 4.03 recall gain in Table 2 is highly unlikely to be due to chance (p < 0.0001), showing that VAS produces significantly more accurate grammars in model selection than using likelihood. 7.3 Word-order typology prediction If VAS is much more highly correlated to parsing accuracy than previous predictors, it is possible to use it as an unsupervised proxy to parsing accuracy. Branching Decision Theory (Dryer, 1992) predicts that VO languages favor right-branching structures and OV languages favor left-branching structures. This prediction can be evaluated by correlating VAS and RBS, and using the sign of the correlation coefficient as the word-order pre2459 Lang. LL VAS Lang. LL VAS Basque –0.578 Latvian Bokmål 0.603 Persian 0.462 Croatian 0.615 Polish Danish 0.551 Portuguese 0.484 Dutch 0.740 Romanian 0.644 Estonian 0.698 0.686 Russian 0.682 French 0.715 Slovak 0.522 Greek 0.452 Slovenian Hebrew 0.600 0.667 Swedish 0.803 Indonesian Ukrainian Italian 0.481 Urdu Japanese 0.627 Uyghur Vietnamese –0.458 Average 0.280 0.539 Table 3: Correlation coefficients between recall at convergence and the proposed predictors on the test partition. See the caption of Table 1 for the description of the columns. diction. This tests if grammars following the branching tendency predicted by the theory should have higher parsing accuracy. 
Table 4 shows results for the VAS-RBS correlation reported along with a few baselines, including a uniform baseline, a majority baseline (where there is oracle knowledge about the data set that the majority of languages is VO), the LL-RBS correlation baseline (where data likelihood is used as the proxy for recall), as well as the recall-RBS oracle performance. There are 29 VO languages and 7 OV languages in the data set (Dryer, 2011).6 Macro F1 is reported for all systems here as the population distribution of OV and VO languages in the world is almost uniform (Dryer, 1992). First, as predicted by BDT, using signs of the correlation between recall and right-branching score yields the best macro F1 score. Second, using VAS as a proxy of recall yields a much higher F score than all the other baselines, including likelihood. In fact, likelihood performs the worst of all the baselines. This result shows again that the correlation between VAS and parsing accuracy is stronger than likelihood at convergence, and this tighter correlation can be useful in other unsupervised tasks. 8 Discussion Positive effects for predictors other than data likelihood suggest that natural language grammars are not optimally learned to explain sentence forms, but may additionally reflect biological constraints 6Dutch has no dominant VO-OV order. Model Gold VO Gold OV Macro-f Right Wrong Right Wrong Uniform 14.5 14.5 3.5 3.5 44.5 Majority 29 0 0 7 44.8 LL 11 18 5 2 42.9 VAS 19 10 7 0 69.2 Recall 27 2 6 1 87.4 Table 4: The macro-F1 scores for the task of predicting the word order of a language. on grammar learning. In particular, the success of VAS may point to a bias toward a function/content distinction in natural language grammars, with common words more likely to form distinctive categories in human learners than co-occurrence statistics would suggest. This bias would produce the observed result that sentences containing more function words have higher per-word probabilities than sentences containing more content words and the existence of such a distinction may give rise to higher surprisal variance. In contrast, a lack of such bias would allow common words to mix with rare words, yielding more uniform probabilities and low surprisal variance, contrary to observations of conditions under which recall is maximized. The fact that simple maximization of data likelihood appears to favor the more uniform response suggests it is not a sufficient model of grammar learning. We first evaluate this hypothesis by examining the ratio between content and function words across sentences to determine whether this ratio is constant in a language. We use the Wall Street Journal portion of the Penn Treebank as the target corpus,7 and calculate the ratio of function to content words in all sentences, and examine the density of the ratio in terms of sentence count and its relationship with sentence length. The left figure in Figure 4 shows the relation between the function-content word ratio and sentence count. The function-content word ratio has a mode at around 0.7, but the count pass is also widely distributed mostly within the range between 0 and 1. This shows that the ratio between content and function words in a language does not appear to be constant. 
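The function/content ratio analysis described above is straightforward to reproduce from a POS-tagged corpus. The sketch below uses the tag sets listed in the footnote to this section; the function name and the treatment of sentences without content words are our own choices.

FUNCTION_TAGS = {"CC", "DT", "IN", "MD", "PDT", "RP", "TO", "PRP", "PRP$",
                 "WDT", "WP", "WP$", "WRB", "UH"}
CONTENT_TAGS = {"JJ", "JJR", "JJS", "NN", "NNS", "NNP", "NNPS", "RB", "RBR",
                "RBS", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "FW"}

def function_content_ratio(tagged_sentence):
    # tagged_sentence: list of (word, POS-tag) pairs for one sentence.
    # Tags outside both sets (e.g. punctuation) are ignored.
    function = sum(1 for _, tag in tagged_sentence if tag in FUNCTION_TAGS)
    content = sum(1 for _, tag in tagged_sentence if tag in CONTENT_TAGS)
    return function / content if content else None

# Over a corpus (an iterable of tagged sentences), skip sentences with no
# content words:
# ratios = [r for r in map(function_content_ratio, corpus) if r is not None]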
The right figure in Figure 4 shows the relationship between the function-content word ra7We consider words with part-of-speech tags like CC, DT, IN, MD, PDT, RP, TO, PRP, PRP$, WDT, WP, WP$, WRB and UH as function words, and words with POS tags like JJ, JJR, JJS, NN, NNS, NNP, NNPS, RB, RBR, RBS, VB, VBD, VBG, VBN, VBP, VBZ and FW as content words. 2460 0.0 0.5 1.0 1.5 Function/Content 0 2000 4000 Sentence count 0.0 0.5 1.0 1.5 Function/Content 0 50 100 Sentence length Figure 4: Left: the relationship between sentence count and the ratio between content and function words. Right: the relationship between sentence length and the ratio in the Wall Street Journal part of the Penn Treebank. 0 1 2 3 4 High VAS vs. high LL Bulgarian English Japanese French Russian Czech 0 1 2 3 4 High vs. low VAS Figure 5: Left: Ratio of number of high joint probability words in the grammars from runs with highest VAS vs. the highest likelihood. Right: Ratio of number of high joint probability words in the grammars from runs with highest VAS vs. the lowest VAS. tio to sentence length. The ratio seems to converge to 0.7 as the sentence gets longer, but the majority of the sentences in the corpus are below 50 words, and the spread of function-content word ratio for sentences with shorter lengths is also very wide. In many languages, the words with highest frequencies are usually closed class words, such as prepositions and determiners, and these words typically split away from other major classes and form their own classes, raising their probabilities. Low frequency words, on the other hand, tend to move from smaller classes into larger classes, and thus lower their probabilities. It is known that low frequency words, especially hapax legomena, are usually open class words like nouns or adjectives. To reassign these words into larger classes may help them find a natural home where the majority is of the same class as the rare words. This strategy helps better assign words to syntactic classes, which in turn helps create syntactic rules which better align with human annotations. The claim that VAS promotes a distinction between function and content words can be evaluated by comparing joint probabilities of the most frequent words in each language and their most common class in grammars from runs with highest VAS, lowest VAS and highest likelihood. In each case, if the most frequent words have higher probabilities in the high VAS run, this may suggest VAS is correlated with function-content distinctions. Figure 5 shows the top 50 most frequent words in 6 different languages with substantial correlations between VAS and recall. The left figure shows the fraction of words in the run with the highest VAS that have joint probabilities of words and their generating categories higher than in the run with the highest likelihood (i.e. words that have higher probabilities in VASselected grammars than likelihood-selected grammars). The right figure shows the fraction of words in the run with the highest VAS that have joint probabilities higher than in the run with the lowest VAS (i.e. words that have higher probabilities in VAS-selected grammars than in VAS-dispreferred grammars). 
For all six languages, the ratio of words with higher joint probability is larger than 1, meaning that frequent words in the run with the highest VAS are assigned to classes with higher joint probabilities than words in the run with the highest likelihood or the run with the lowest VAS, consistent with the hypothesis that VAS promotes a distinction between function and content words. Probabilities for some example words are shown in Figure 6. A different explanation may be considered that information content in a sentence is higher when the sentence is longer (Keller, 2004), and when VAS is maximized, grammars that produce uniform information content across different sentence length are disfavored. For example, punctuation contributes more to the likelihood of short sentences than to long sentences. Assigning high probabilities to punctuation may create the result of sentence likelihood co-varying with sentence length. For a grammar to conform to this rule may help it produce structures more in line with hu2461 the of a in for English 0.000 0.005 0.010 0.015 0.020 0.025 Joint probability le à en un pour French High VAS High LL Low VAS Figure 6: Example high frequency words from the highest VAS, the highest likelihood and the lowest VAS runs in English and French. 0 10 20 30 40 50 Sentence length 2 3 4 5 Average Surprisal High VAS Low VAS Figure 7: The distribution of VAS values across sentences of different lengths in the highest VAS run and the lowest VAS run for English. The correlations between VAS and sentence length in both runs are insignificant. man annotations in the data set. Figure 7 shows the distribution of VAS plotted against sentence length. The regression lines for both the highest VAS and lowest VAS cases show a flat slope indicating the correlation between VAS and sentence length is not substantial, which is supported by correlation testing with Kendall’s τ test between sentence length and VAS in the high VAS run (τ = −0.01, p = 0.41) and in the low VAS run (τ = −0.02, p = 0.28). This shows that the effectiveness of VAS cannot be explained by the hypothesis that it guides the grammar to generate syntactic structures by shaping the sentential information content to co-vary with sentence length. 9 Conclusion This work explores the non-optimality of data likelihood for model selection in unsupervised grammar induction. Experiments with several linguistically- and psycholinguistically-motivated predictors on a large multilingual data set show that variance of average surprisal (VAS) is highly predictive of parsing performance. Using it as the criterion for model selection outperforms data likelihood significantly. Further evidence shows VAS to be a better candidate than data likelihood for predicting word-order typology. Analyses show that VAS seems to separate content words from function words in natural language grammars and better arrange words with different frequencies into different classes that are more consistent with these linguistic distinctions. Acknowledgments The authors would like to thank the anonymous reviewers for their helpful comments. Computations for this project were partly run on the Ohio Supercomputer Center (1987). This research was partially funded by the Defense Advanced Research Projects Agency award HR0011-15-2-0022. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. This work was also supported by the National Science Foundation grant 1816891. 
All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2464–2474 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2464 Cross-Domain NER using Cross-Domain Language Modeling Chen Jia†‡ , Xiaobo Liang3∗and Yue Zhang‡§ †Fudan University, China ‡School of Engineering, Westlake University, China 3Natural Language Processing Lab, Northeastern University, China §Institute of Advanced Technology, Westlake Institute for Advanced Study {jiachen,zhangyue}@westlake.edu.cn, [email protected] Abstract Due to limitation of labeled resources, crossdomain named entity recognition (NER) has been a challenging task. Most existing work considers a supervised setting, making use of labeled data for both the source and target domains. A disadvantage of such methods is that they cannot train for domains without NER data. To address this issue, we consider using cross-domain LM as a bridge cross-domains for NER domain adaptation, performing crossdomain and cross-task knowledge transfer by designing a novel parameter generation network. Results show that our method can effectively extract domain differences from crossdomain LM contrast, allowing unsupervised domain adaptation while also giving state-ofthe-art results among supervised domain adaptation methods. 1 Introduction Named entity recognition (NER) is a fundamental task in information extraction and text understanding. Due to large variations in entity names and flexibility in entity mentions, NER has been a challenging task in NLP. Cross-domain NER adds to the difficulty of modeling due to the difference in text genre and entity names. Existing methods make use of feature transfer (Daum´e III, 2009; Kim et al., 2015; Obeidat et al., 2016; Wang et al., 2018) and parameters sharing (Lee et al., 2017; Sachan et al., 2018; Yang et al., 2017; Lin and Lu, 2018) for supervised NER domain adaptation. Language modeling (LM) has been shown useful for NER, both via multi-task learning (Rei, 2017) and via pre-training (Peters et al., 2018). Intuitively, both noun entities and context patterns can be captured during LM training, which benefits the recognition of named entities. A natural question that arises is whether cross-domain ∗Work done when visiting Westlake University. W News Domain Target Domain NER Task LM Task ner src,  ner , tgt  lm , tgt  lm src,  Vertical Transfer Vertical Transfer Horizontal Transfer T ner I T lm I D src I D tgt I Figure 1: Overview of the proposed model. LM training can benefit cross-domain NER. Figure 1 shows one example, where there are relatively large training data in the news domain but no data or a small amount of data in a target domain. We are interested in transferring NER knowledge from the news domain to the target domain by contrasting large raw data in both domains through cross-domain LM training. Naive multi-task learning by parameter sharing (Collobert and Weston, 2008) does not work effectively in this multi-task, multi-domain setting due to potential conflict of information. To achieve cross-domain information transfer as shown in the red arrow, two types of connections must be made: (1) cross-task links between NER and LM (for vertical transfer) and (2) cross-domain links (for horizontal transfer). 
We investigate a novel parameter generator network to this end, by decomposing the parameters θ of the NER or LM task on the source or target text domain into the combination θ = f(W, ID d , IT t ) of a set of meta parameters W, a task embedding vector IT t (t ∈{ner, lm}) and a domain embedding vector ID d (d ∈{src, tgt}), so that domain and task-correlations can be learned through similarities between the respective domain and task embedding vectors. 2465 In Figure 1, the values of W, {IT t }, {ID d } and the parameter generation network f(·, ·, ·) are all trained in a multi-task learning process optimizing NER and LM training objectives. Through the process, connections between the sets of parameters θsrc,ner, θsrc,lm, θtgt,ner and θtgt,lm are decomposed into two dimensions and distilled into two task embedding vectors IT ner, IT lm and two domain embedding vectors ID src, ID tgt, respectively. Compared with traditional multi-task learning, our method has a modular control over cross-domain and cross-task knowledge transfer. In addition, the four embedding vectors IT ner, IT lm, ID src and ID tgt can also be trained by optimizing on only three datasets for θsrc,ner, θsrc,lm and θtgt,lm, therefore achieving zero-shot NER learning on the target domain by deriving θtgt,ner automatically. Results on three different cross-domain datasets show that our method outperforms naive multitask learning and a wide range of domain adaptation methods. To our knowledge, we are the first to consider unsupervised domain adaptation for NER via cross-domain LM tasks and the first to work on NER transfer learning between domains with completely different entity types (i.e. news vs. biomedical). We released our data and code at https://github.com/ jiachenwestlake/Cross-Domain_NER. 2 Related Work NER. Recently, neural networks have been used for NER and achieved state-of-the-art results. Hammerton (2003) use a unidirectional LSTM with a Softmax classifer. Collobert et al. (2011) use a CNN-CRF architecture. Santos and Guimar˜aes (2015) extend the model by using character CNN. Most recent work uses LSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Yang et al., 2018). We choose BiLSTM-CRF as our method since it gives stateof-the-art resutls on standard benchmarks. Cross-domain NER. Most existing work on cross-domain NER investigates the supervised setting, where both source and target domains have labeled data. Daum´e III (2009) maps entity label space between the source and target domains. Kim et al. (2015) and Obeidat et al. (2016) use label embeddings instead of entities themselves as the features for cross-domain transfer. Wang et al. (2018) perform label-aware feature representation transfer based on text representation learned by BiLSTM networks. Recently, parameters transfer approaches have seen increasing popularity for cross-domain NER. Such approaches first initialize a target model with parameters learned from source-domain NER (Lee et al., 2017) or LM (Sachan et al., 2018), and then fine-tune the model using labeled NER data from the target domain. Yang et al. (2017) jointly train source- and target-domain models with shared parameters, Lin and Lu (2018) add adaptation layers on top of existing networks. Except for Sachan et al. (2018), all the above methods use crossdomain NER data only. In contrast, we leverage both NER data and raw data for both domains. 
In addition, our method can deal with a zero-shot learning setting for unsupervised NER domain adaptation, which no existing work considers. Learning task embedding vectors. There has been related work using task vector representations for multi-task learning. Ammar et al. (2016) learn language embeddings for multi-lingual parsing. Stymne et al. (2018) learn treebank embeddings for cross-annotation-style parsing. These methods use “task” embeddings to augment word embedding inputs, distilling “task” characteristics into these vectors for preserving word embeddings. Liu et al. (2018) learn domain embeddings for multi-domain sentiment classification. They combine domain vectors with domainindependent representation of the input sentences to obtain a domain-specific input representation. A salient difference between our work and the methods above is that we use domain and task embeddings to obtain domain and task-specific parameters, rather than input representations. Closer in spirit to our work, Platanios et al. (2018) learn language vectors, using them to generate parameters for multi-lingual machine translation. While one of their main motivation is to save the parameter space when the number of langauges grows, our main goal is to investigate the modularization of transferable knowledge in a cross-domain and cross-task setting. To our knowledge, we are the first to study “task” embeddings in a multi-dimensional parameter decomposition setting (e.g. domain + task). 3 Methods The overall structure of our proposed model is shown in Figure 2. The bottom shows the com2466 Source Domain LM Task Target Domain NER Task Input Texts Word Rep. LSTM Hidden T ner I D src I W ner src, LSTM    Figure 2: Model architecture. bination of two domains and two tasks. Given an input sentence, word representations are first calculated through a shared embedding layer (Subsection 3.1). Then a set of task- and domainspecific BiLSTM parameters is calculated through a novel parameter generation network (Subsection 3.2), for encoding the input sequence. Finally, respective output layers are used for different tasks and domains (Subsection 3.3). 3.1 Input Layer Following Yang et al. (2018), given an input x = [x1, x2, . . . , xn] from a source-domain NER training set Sner = {(xi, yi)}m i=1 or target-domain NER training set Tner = {(xi, yi)}n i=1, a sourcedomain raw text set Slm = {(xi)}p i=1 or targetdomain raw text set Tlm = {(xi)}q i=1, each word xi is represented as the concatenation of its word embedding and the output of a character level CNN : vi = [ew(xi) ⊕CNN(ec(xi))], (1) where ew represents a shared word embedding lookup table and ec represents a shared character embedding lookup table. CNN(·) represents a standard CNN acting on a character embedding sequence ec(xi) of a word xi. ⊕represents vector concatenation. 3.2 Parameter Generation Network A bi-directional LSTM layer is applied to v = [v1, v2, . . . , vn]. To transfer knowledge across domains and tasks, we dynamically generate the parameters of BiLSTM using a Parameter Generation Network (f(·, ·, ·)). The resulting parameters are denoted as θd,t LSTM, where d ∈{src, tgt} and t ∈ {ner, lm} represent domain label and task label, respectively. More specifically: θd,t LSTM = W ⊗ID d ⊗IT t , (2) where W ∈RP (LSTM)× V ×U represents a set of meta parameters in the form of a 3rd-order tensor and ID d ∈RU , IT t ∈RV represent domain embedding and task embedding, respectively. U, V represent domain and task embedding sizes, respectively. 
P (LSTM) is the number of BiLSTM parameters. ⊗refers to tensor contraction. Given the input v and the parameter θd,t LSTM, the hidden outputs of a task and domain-specific BiLSTM unit can be uniformly written as: −→ h d,t i = LSTM(−→ h d,t i−1, vi, −→θ d,t LSTM) ←− h d,t i = LSTM(←− h d,t i+1, vi, ←−θ d,t LSTM), (3) for the forward and backward directions, respectively. 3.3 Output Layers NER. Standard CRFs (Ma and Hovy, 2016) are used as output layers for NER. Given h = [−→ h 1 ⊕ ←− h 1, . . . , −→ h n⊕←− h n], the output probability p(y|x) over label sequence y = l1, l2, . . . , li produced on input sentence x is: p(y|x)= exp{P i(wli CRF·hi+b (li−1,li) CRF )} P y′ exp{P i(w l′ i CRF·hi+b (l′ i−1,l′ i) CRF )} , (4) where y′ represents an arbitary labal sequence, and wli CRF is a model parameter specific to li, and b(li−1,li) CRF is a bias specific to li−1 and li. Considering that the NER label sets across domains can be different, we use CRF(S) and CRF(T) to represent CRFs for the source and target domains in Figure 2, respectively. We use the first-order Viterbi algorithm to find the highest scored label sequence. Language modeling. A forward LM (LMf) uses the forward LSTM hidden state −→ h = [−→ h 1, . . . , −→ h n] to compute the probability of next word xi+1 given x1:i, represented as pf(xi+1|x1:i). A backward LM (LMb) computes pb(xi−1|xi:n) based on backward LSTM hidden state ←− h = [←− h 1, . . . , ←− h n] in a similar manner. 2467 Considering the computational efficiency, Negative Sampling Softmax (NSSoftmax) (Mikolov et al., 2013; Jean et al., 2014) is used to compute forward and backward probabilities, respectively, as follows: pf(xi+1|x1:i)= 1 Z exp{w⊤ #xi+1 −→ h i+b#xi+1} pb(xi−1|xi:n)= 1 Z exp{w⊤ #xi−1 ←− h i+b#xi−1}, (5) where #x represents the vocabulary index of the target word x. w#x and b#x are the target word vector and the target word bias, respectively. Z is the normalization item computed by Z = X k∈{#x∪Nx} exp{w⊤ k hi + bk}, (6) where Nx represents the nagative sample set of the target word x. Each element in the set is a random number from 1 to the cross-domain vocabulary size. hi represents −→ h i in LMf and ←− h i in LMb, respectively. 3.4 Training Objectives NER. Given a manually labeled dataset Dner = {(xn, yn)}N n=1, the sentence-level negative loglikehood loss is used for training: Lner = − 1 |Dner| N X n=1 log(p(yn|xn)) (7) Language modeling. Given a raw data set Dlm = {(xn)}N n=1, LMf and LMb are trained jointly using Negative Sampling Softmax. Negative samples are drawn based on word frequency distribution in Dlm. The loss function is: Llm = − 1 2 |Dlm| N X n=1 T X t=1 { log(pf(xn t+1|xn 1:t)) + log(pb(xn t−1|xn t:T )) } (8) Joint training. To perform joint training for NER and language modeling on both the source and target domains, we minimize the overall loss: L= X d∈{src,tgt} λd(Ld ner + λtLd lm) + λ 2 ∥Θ∥2, (9) where λd is a domain weight and λt is a task weight. λ is the L2 regularization parameters and Θ represents the parameters set. 
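The contraction in Equation 2 is the crux of the parameter generation network: a single 3rd-order tensor of meta parameters W is combined with one domain embedding and one task embedding to yield a separate set of BiLSTM parameters for each (domain, task) pair, so cross-domain and cross-task sharing happens through W while the small embedding vectors carry the differences. Below is a minimal PyTorch sketch of this contraction; the parameter count, the embedding sizes, and the mapping of the flat output vector onto concrete LSTM weight matrices are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn


class ParameterGenerator(nn.Module):
    """Sketch of Equation 2: theta^{d,t}_LSTM = W (x) I^D_d (x) I^T_t."""

    def __init__(self, num_lstm_params, domain_dim=8, task_dim=8):
        super().__init__()
        # Meta parameters W: a 3rd-order tensor of shape (P, V, U).
        self.W = nn.Parameter(0.01 * torch.randn(num_lstm_params, task_dim, domain_dim))
        # One row per domain (src = 0, tgt = 1) and per task (ner = 0, lm = 1).
        self.domain_emb = nn.Parameter(torch.randn(2, domain_dim))  # I^D_d
        self.task_emb = nn.Parameter(torch.randn(2, task_dim))      # I^T_t

    def forward(self, domain_id, task_id):
        i_d = self.domain_emb[domain_id]   # (U,)
        i_t = self.task_emb[task_id]       # (V,)
        # Double tensor contraction over the task (V) and domain (U) axes.
        return torch.einsum('pvu,v,u->p', self.W, i_t, i_d)  # flat (P,) vector


# Hypothetical BiLSTM size: 2 directions, 4 gates, 200 hidden units, 100-dim inputs.
num_params = 2 * 4 * 200 * (100 + 200 + 1)
generator = ParameterGenerator(num_params)
theta_src_ner = generator(0, 0)   # parameters for source-domain NER
theta_tgt_lm = generator(1, 1)    # parameters for target-domain LM
```

In practice the returned flat vector would be reshaped into the BiLSTM weight and bias tensors used in Equation 3, and W together with the four embedding vectors would be updated jointly under the weighted objective of Equation 9.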
Algorithm 1 Multi-task learning Input: training data {Sner, T ∗ ner} and {Slm, Tlm} Parameters: - Parameters Generator: W, {ID d }, {IT t } - Output layers: θcrfs,θcrft ∗,θnss Output: Target model 1: while training steps not end do 2: split training data into minibatches: Bners, Bnert∗, Blms, Blmt 3: # source-domain NER 4: θsrc,ner LSTM ←f(W, ID src, IT ner) 5: ∆W, ∆ID src, ∆IT ner, ∆θcrfs ←train(Bners) 6: # source-domain LM 7: θsrc,lm LSTM ←f(W, ID src, IT lm) 8: ∆W, ∆ID src, ∆IT lm, ∆θnss ←train(Blms) 9: if do supervised learning then 10: # target-domain NER 11: θtgt,ner LSTM ←f(W, ID tgt, IT ner) 12: ∆W, ∆ID tgt, ∆IT ner, ∆θcrft ←train(Bnert) 13: end if 14: # target-domain LM 15: θtgt,lm LSTM ←f(W, ID tgt, IT lm) 16: ∆W, ∆ID tgt, ∆IT lm, ∆θnss ←train(Blmt) 17: Update W, {ID}, {IT }, θcrfs, θcrft ∗, θnss 18: end while Note: * means none in unsupervised learning 3.5 Multi-Task Learning Algorithm We propose a cross-task and cross-domain joint training method for multi-task learning. Algorithm 1 provides the training procedure. In each training step (line 1 to 18), minibatches of the 4 tasks in Figure 1 take turns to train (lines 4-5, 7-8, 11-12 and 15-16, respectively). Each task first generates the parameters θd,t LSTM using W and their respective ID d , IT t , and then compute gradients for f(W, ID d , IT t ) and domain-specific output layer (θcrfs, θcrft or θnss). In the scenario of unsupervised learning, there is no training data of the target-domain NER, and lines 11-12 will not be executed. At the end of each training step, parameters of f(·, ·, ·) and private output layers are updated together in line 17. 4 Experiments We conduct experiments on three cross-domain datasets, comparing our method with a range of transfer learning baselines under both the supervised domain adaptation and the unsupervised domain adaptation settings. 2468 4.1 Experimental Settings Data. We take the CoNLL-2003 English NER data (Sang and Meulder, 2003) as our sourcedomain data. In addition, 377,592 sentences from the Reuters are used for source-domain LM training in unsupervised domain adaptation. Three sets of target-domain data are used, including two publicly available biomedical NER datasets, BioNLP13PC (13PC) and BioNLP13CG (13CG) 1 and a science and technology dataset we collected and labeled. Statistics of the datasets are shown in Table 1. CoNLL-2003 contains four types of entities, namely PER (person), LOC (location), ORG (organization) and MISC (miscellaneous). BioNLP13CG consists of five types, namely CHEM (Chemical), CC (cellular component), G/p (gene/protein), SPE (species) and CELL (cell), BioNLP13PC consists of three types of those entities: CHEM, CC and G/P. We use text of their training sets for language modeling training 2. For the science and technology dataset, we collect 620 articles from CBS SciTech News3, manually labeling them as a test set for unsupervised domain adaptation. It consists of four types of entities following the CoNLL-2003 standard. The numbers of each entity type are comparable to the CoNLL test set, as listed in Table 2. The main difference is that a great number of entities in the CBS News dataset are closely related to the domain of science and technology. In particular, for the MISC category, more technology terms such as Space X, bitcoin and IP are included, as compared with the CoNLL data set. Lack of such entities in the CoNLL training set and the difference of text genre cause the main difficulty in domain transfer. 
To address this difference, 398,990 unlabeled sentences from CBS SciTech News are used for LM training. We released this dataset as one contribution of this paper. Hyperparameters. We choose NCRF++ (Yang and Zhang, 2018) for developing the models. Our hyperparameter settings largly follow (Yang et al., 2018), with the following exceptions: (1) The batch size is set to 30 instead of 10 for shorter training time in multi-task learning; (2) RMSprop with a learning rate of 0.001 is used for our Sin1https://github.com/cambridgeltl/MTL-Bioinformatics2016 2We tried to use a larger number of raw data from the PubMed, but this did not improve the performances. 3https://www.cbsnews.com/ Dataset Type Train Dev Test CoNLL Sentence 15.0K 3.5K 3.7K Entity 23.5K 5.9K 5.6K BioNLP13PC Sentence 2.5K 0.9K 1.7K Entity 7.9K 2.7K 5.3K BioNLP13CG Sentence 3.0K 1.0K 1.9K Entity 10.8K 3.6K 6.9K CBS News Sentence 2.0K Entity 4.1K Table 1: Statistic of datasets. Dataset PER LOC ORG MISC CoNLL Train 6,600 7,140 6,321 3,438 Dev 1,842 1,837 1,341 922 Test 1,617 1,668 1,661 702 CBS News Test 1,660 629 1,352 497 Table 2: Entity numbers of the CoNLL dataset and the CBS SciTech News dataset. MultiTask -Target Figure 3: Development results on 13CG. gle Task Model (STM-TARGET) for the strongest baseline according to development experiments, while the multi-task models use SGD with a learning rate of 0.015 as (Yang et al., 2018). We use domain embeddings and task embeddings of size 8 to fit the model in one GPU of 8GB memory. The word embeddings for all models are initialized with GloVe 100-dimension vectors (Pennington et al., 2014) and fine-tuned during training. Character embeddings are randomly initialized. 4.2 Development Experiments We report a set of development experiments on the biomedical datasets 13PC and 13CG. Learning curves. Figure 3 shows the F1-scores against the number of training iterations on the 13CG development set. STM-TARGET is our single task model trained on the target-domain training set Tner; FINETUNE is a model pre-trained 2469 Figure 4: Joint training in multi-task learning. using the source-domain training data Sner and then fine-tuned using the target-domain data Tner; MULTITASK simultaneously trains source-domain NER and target-domain NER following Yang et al. (2017). For STM+ELMO, we mix the source- and target-domain raw data for training a contextualized ELMo representation (Peters et al., 2018), which is then used as inputs to an STM-TARGET model. This model shows a different way of transfer by using raw data, which is different from FINETUNE and MULTITASK. Note that due to differences in the label sets, FINETUNE and MULTITASK both share parameters between the two models except for the CRF layers. As can be seen from Figure 3, the F1 of all models increase as the number of training iteration increases from 1 to 50, with only small fluctuations. All of the models converge to a plateau range when the iteration number increases to 100. All transfer learning methods outperform the STMTARGET method, showing the usefulness of using source data to enhance target labeling. The strong performance of STM+ELMO over FINETUNE and MULTITASK shows the usefulness of raw text. By simultaneously using source-domain raw text and target-domain raw text, our model gives the best F1 over all iterations. Effect of language model for transfer. 
Figure 4 shows the results of source language modeling, target language modeling, source NER and target NER for both development datasets when the number of training iterations increases. As can be seen, multi-task learning under our framework brings benefit to all tasks, without being negatively influenced by potential conflicts between tasks (Bingel and Søgaard, 2017; Mou et al., 2016). Methods Datasets 13PC 13CG Crichton et al. (2017) 81.92 78.90 STM-TARGET 82.59 76.55 MULTITASK(NER+LM) 81.33 75.27 MULTITASK(NER) 83.09 77.73 FINETUNE 82.55 76.73 STM+ELMO 82.76 78.24 CO-LM 84.43 78.60 CO-NER 83.87 78.43 MIX-DATA 83.88 78.70 FINAL 85.54† 79.86† Table 3: F1-scores on 13PC and 13CG. † indicates that the FINAL results are statistically significant compared to all transfer baselines and ablation baselines with p < 0.01 by t-test. 4.3 Final Results on Supervised Domain Adaptation We investigate supervised transfer from CoNLL to 13PC and 13CG, comparing our model with a range of baseline transfer approaches. In particular, three sets of comparisons are made, including (1) a comparison between our method with other supervised domain adaptation methods, such as MULTITASK(NER) 4 and ELMo, (2) a comparison between the use of different subsets of data for transfer under our own framework and (3) a comparison with the current state-of-the-art in the literature for these datasets. (1) Comparison with other supervised transfer methods. We compare our method with STM-TARGET, MULTITASK(NER), FINETUNE and STM+ELMO. The observations are similar to those on the development set. Note that FINETUNE does not always improve over STMTARGET, which shows that the difference between the two datasets can hurt naive transfer learning, without considering domain descriptor vectors. ELMo. The ELMo methods use raw text via language model pre-training, which has been shown to benefit many NLP tasks (Peters et al., 2018). In our cross-domain setting, STM+ELMO gives a significant improvement over STM-TARGET on the 13CG dataset, but only a small improvement on the 13PC dataset. The overall improvements are comparable to that of MULTITASK only using the raw data. We also tried to use the ELMo model (Original) released by Peters 4Here MULTITASK(NER) is the same model as MULTITASK in the development experiments. 2470 Source Domain NER Target Domain NER Source Domain LM Target Domain LM Co-NER Co-LM Mix-Data Final Figure 5: Ablations of the model. et al. (2018) 5, which is trained over approximately 800M tokens. The results are 84.08% on 13PC and 79.57% on 13CG, respectively, which are lower compared to 85.54% and 79.86% by our method, respectively, despite the use of much larger external data. This shows the effectiveness of our model. Multi-task of NER and LM. We additionally compare our method with the naive multi-task learning setting (Collobert and Weston, 2008), which uses shared parameters for the four tasks but use the exact same data conditions as the FINAL model. which is shown in the MULTITASK(NER+LM) method in Table 3. The method gives an 81.33% F1 on 13PC and 75.27% on 13CG, which is much lower compared with all baseline models. This demonstrates the challenge of the cross-domain and cross-task setting, which contains conflicting information from different text genres and task requirements. (2) Ablation experiments. Now that we have compared our method with baselines utilizing similar data sources, we turn to investigate the influence of data sources on our own framework. 
As shown in Figure 5, we make novel use of 4 data sources for the combination of two tasks in two domains. If some sources are removed, our settings fall back to traditional transfer learning. For example, if the LM task is not considered, then the task setting is standard supervised domain adaptation. The baselines include (1) CO-LM, which represents our model without source-domain tasks, joint training the target-domain NER and language modeling, transferring parameters as: θt LSTM = W ⊗IT t , (t ∈{ner, lm}). (2) CO-NER, deleting tasks, jointly training source- and target-domain 5https://allennlp.org/elmo Figure 6: Influence of target-domain data. NER, transferring parameters as: θd LSTM = W ⊗ ID d , (d ∈{src, tgt}). (3) MIX-DATA, which uses the same NER data in source- and target-domain as FINAL, but also uses combined raw text to train source- and target-domain language models. Our method outperforms all baselines significantly, which shows the importance of using rich data. A contrast between our method and MIXDATA shows the effectiveness of using two different language models across domains. Even through MIX-DATA uses more data for training language models on both the source and target domains, it cannot learn a domain contrast since both sides use the same mixed data. In contrast, our model gives significantly better results by gleaning such contrast. (3) Comparison with current state-of-the-art. Finally, Table 3 also shows a comparison with a state-of-the-art method on the 13PC and 13CG datasets (Crichton et al., 2017), which leverages POS tagging for multi-task learning by using cotraining method. Our model outperforms their results, giving the best results in the literature. Discussion. When the number of target-domain NER sentences is 0, the transfer learning setting is unsupervised domain adaptation. As the number of target domain NER sentences increases, they will intuitively play an increasingly important role for target NER. Figure 6 compares the F1-scores of the baseline STM-TARGET and our multi-task model with varying numbers of target-domain NER training data under 100 training epochs. In the nearly unsupervised setting, our method gives the largest improvement of 20.5% F1-scores. As the number of training data increases, the gap between the two methods becomes smaller. But our method still gives a 3.3% F1 score gain when the number of training sentences reach 3,000, show2471 MultiTask MultiTask -Target -Target Figure 7: Fine-grained comparisons on 13PC and 13CG. ing the effectiveness of LM in knowledge transfer. Figure 7 shows fine-grained NER results of all available entity types. In comparison to STM-TARGET, FINETUNE and MULTITASK, our method outperforms all the baselines on each entity type, which is in accordance with the conclusion of development experiments. 4.4 Unsupervised Domain Adaptation For unsupervised domain adaptation, many settings in Subsection 4.2 do not hold, including STM-TARGET, FINETUNE, MULTITASK, COLM and CO-NER. Instead, we add a naive baseline, STM-SOURCE, which directly applies a model trained on the source-domain CoNLL-2003 data to the target domain. In addition, we compare with models that make use of source NER, source LM and target LM data, including SELFTRAIN, which improves a source NER model on target raw text (Daum´e III, 2008). 
STM-ELMO, which uses ELMo embeddings trained over combined source- and target-domain raw text for STMSOURCE, STM-ELMO(SRC), which uses only the source-domain raw data for training ELMo, STMELMO(TGT), which uses only the target-domain raw text for training ELMo, and DANN (Ganin et al., 2016), which performs generative adversarial training over source- and target-domain raw data. Final results. The final results are shown in Table 4. SELF-TRAIN gives better results compared with the STM-SOURCE baseline, which shows the effectiveness of target-domain raw data. Adversarial training brings significantly better improvements compared with naive self-training. Among ELMo methods, the model using both the source-domain raw data and target-domain raw data outperforms the model using only the sourceor target-domain raw data. ELMo also outperMethods P R F1 STM-SOURCE 63.87 71.28 67.37 SELF-TRAIN 62.56 75.04 68.24 DANN(Ganin et al., 2016) 65.14 73.84 69.22 STM+ELMO(SRC) 65.43 70.14 67.70 STM+ELMO(TGT) 67.78 72.73 70.17 STM+ELMO 67.19 74.93 70.85 Ours 68.48 79.52 73.59† Table 4: Three metrics on CBS SciTech News. We use the CoNLL dev set to select the hyperparameters of our models. ELMo and Ours are given the same overall raw data, SELF-TRAIN and DANN use the selected raw data from overall raw data for better performances. † indicates that our results are statistically significant compared to all baselines with p < 0.01 by t-test. Figure 8: Amount of raw data. forms DANN, which shows the strength of LM pre-training. Interestingly, ELMo with targetdomain raw data gives similar accuracies to ELMo with mixed source- and target-domain data, which shows that target-domain LM is more useful for the pretraining method. It also indicates that our method makes better use of LMs over two different domains. Compared with all baseline models, our model gives a final F1 of 73.59, significantly better than the best result of 70.85 obtained by STM+ELMO, demonstrating the effectiveness of parameter generation network for cross-task, cross-domain knowledge transfer. Influence of raw text. For zero-shot learning, domain adaptation is achieved solely through LM channels. We thus compare the effectiveness of raw text from both the source domain and the target domain. Figure 8 shows the results. The line “SRC: varying; TGT: varying” shows the F1scores against varying numbers of raw sentences in both source and target domains. Each number in the x-coordinate indicates an equal amount of source- and target-domain text. As can be seen, increasing raw text gives increased F1 for 2472 Entity Type Correct Num ∆ STM Ours PER 1,501 1,569 +4.10% LOC 469 512 +6.84% ORG 941 1,050 +8.06% MISC 134 193 +11.87% Total 3,045 3,324 +6.74% Table 5: Growth rate of correctly recognized enetity number in comparison with the STM-SOURCE. ∆represents the growth with respect to the total number of entities in the CBS SciTech News test set. Sentence Brittany Kaiser spoke to “CBS This Morning” co-host John Dicherson for her first U.S. broadcast network interview. STM-SRC Brittany Kaiser ORG spoke to “ CBS ORG This Morning” ... DANN Brittany Kaiser PER spoke to “ CBS This Morning ORG” ... Ours Brittany Kaiser PER spoke to “ CBS This Morning MISC” ... Table 6: Example. Red and green represent incorrect and correct entities, respectively. NER, which demonstrates effective use of raw data by our method. 
The lines “SRC: 100%; TGT: varying” and “SRC: varying; TGT: 100%” show to alternative measures by fixing the sourceand target-domain raw text to 100% of our data, and then varying only the other domain text. A comparison between the two lines shows that the target-domain raw data gives more influence to the domain adaptation power, which conforms to intuition. Discussion. Table 5 shows a breakdown for the improvement of our model over STM-SOURCE by different entity types. Compared with PER, LOC and ORG names, our method brings the most improvements over MISC entities, which are mostly types that are specific to the technology domain (see Subsection 4.1). Intuitively, the amount of overlap is the weakest for this type of entities between raw text from source and target domains. Therefore, the results show the effectiveness of our method in deriving domain contrast with respect to NER from cross-domain language modeling. Table 6 shows a case study, where “Brittany Kaiser” is a personal name and “CBS This Morning” is a programme. Without using raw text, STM-SOURCE misclassifies “Brittany Kaiser” as ORG. Both DANN and our method give the correct results because the name is mentioned in raw text, from which connections between the pattern “PER spoke” can be drawn. With the help of raw text, DANN and our method can also recognize “CBS This Morning” as a entity, which has a common pattern of consecutive capital letters in both source and target domains. DANN misclassifies “CBS This Morning” as ORG. In contrast, our model can classify it correctly as the category of MISC, in which most entities are specific to the target domain (see Subsection 4.1). This is likely because adversarial training in DANN aims to match feature distributions between source and target domains by mimicing the domain discriminator, which can lead to concentration on domain common features but confusion about such domain-specific features. This demonstrates the advantage of our method in deriving both domain common and domain-specific features. 5 Conclusion We considered NER domain adaptation by extracting knowledge of domain differences from raw text. For this goal, cross-domain language modeling is conducted through a novel parameter generation network, which decomposes domain and task knowledge into two sets of embedding vectors. Experiments on three datasets show that our method is highly effective among supervised domain adaptation methods, while allowing zeroshot learning in unsupervised domain adaptation. Acknowledgments The three authors contributed equally to this work. Yue Zhang is the corresponding author. We gratefully acknowledge funding from NSFC (grant #61572245). We also thank the anonymous reviewers for their helpful comments and suggestions. References Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Short Papers), volume 2, pages 164–169. Association for Computational Linguistics. Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. 2473 Ronan Collobert and Jason Weston. 2008. 
A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(1):2493–2537. Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learning approach to biomedical named entity recognition. BMC Bioinformatics, 18(1):368. Hal Daum´e III. 2009. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263. Association for Computational Linguistics. Hal Daum´e III. 2008. Cross-task knowledgeconstrained self training. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 1, pages 680–688. Association for Computational Linguistics. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096–2030. James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL, volume 4, pages 172–175. Association for Computational Linguistics. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1–10. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning techniques for disparate label sets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Long Papers), volume 1, pages 473–482. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2017. Transfer learning for named-entity recognition with neural networks. Computing Research Repository, arXiv:1705.06273. Version 1. Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2012–2022. Association for Computational Linguistics. Qi Liu, Yue Zhang, and Jiangming Liu. 2018. Learning domain representation for multi-domain sentiment classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Long Papers), volume 1, pages 541–550. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Long Papers), volume 1, pages 1064–1074. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How transferable are neural networks in nlp applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 479–489. Association for Computational Linguistics. Rasha Obeidat, Xiaoli Fern, and Prasad Tadepalli. 2016. Label embedding approach for transfer learning. In International Conference on Biomedical Ontology and BioCreative. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, volume 4, pages 1532–1543. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Long Papers), volume 1, pages 2227– 2237. Association for Computational Linguistics. 2474 Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom M. Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 425–435. Association for Computational Linguistics. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Long Papers), volume 1, pages 2121–2130. Association for Computational Linguistics. Devendra Singh Sachan, Pengtao Xie, and Eric P. Xing. 2018. Effective use of bidirectional language modeling for medical named entity recognition. Proceedings of Machine Learning Research, 85:1–19. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Association for Computational Linguistics. Cicero Nogueira dos Santos and Victor Guimar˜aes. 2015. Boosting named entity recognition with neural character embeddings. In Proceedings of the Fifth Named Entity Workshop, joint with 53rd ACL and the 7th IJCNLP, pages 25–33. Association for Computational Linguistics. Sara Stymne, Miryam de Lhoneux, Aaron Smith, and Joakim Nivre. 2018. Parser training with heterogeneous treebanks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 619–625. Association for Computational Linguistics. Zhenghui Wang, Yanru Qu, Liheng Chen, Shen Jian, Weinan Zhang, Shaodian Zhang, Yimei Gao, Gen Gu, Ken Chen, and Yu Yong. 2018. Label-aware double transfer learning for cross-specialty medical named entity recognition. In Proceedings of NAACL-HLT 2018, pages 1–15. Association for Computational Linguistics. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879–3889. Jie Yang and Yue Zhang. 2018. Ncrf++: An opensource neural sequence labeling toolkit. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics-System Demonstrations, pages 74–79. Association for Computational Linguistics. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In International Conference on Learning Representations.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2475–2485 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2475 Graph-based Dependency Parsing with Graph Neural Networks Tao Ji , Yuanbin Wu , and Man Lan Department of Computer Science and Technology, East China Normal University {taoji.cs}@gmail.com {ybwu,mlan}@cs.ecnu.edu.cn Abstract We investigate the problem of efficiently incorporating high-order features into neural graph-based dependency parsing. Instead of explicitly extracting high-order features from intermediate parse trees, we develop a more powerful dependency tree node representation which captures high-order information concisely and efficiently. We use graph neural networks (GNNs) to learn the representations and discuss several new configurations of GNN’s updating and aggregation functions. Experiments on PTB show that our parser achieves the best UAS and LAS on PTB (96.0%, 94.3%) among systems without using any external resources. 1 Introduction In recent development of dependency parsers, learning representations is gaining in importance. From observed features (words, positions, POS tags) to latent parsing states, building expressive representations is shown to be crucial for getting accurate and robust parsing performances. Here we focus on graph-based dependency parsers. Given a sentence, a parser first scores all word pairs about how possible they hold valid dependency relations, and then use decoders (e.g., greedy, maximum spanning tree) to generate a full parse tree from the scores. The score function is a key component in graph-based parses. Commonly, a neural network is assigned to learn low dimension vectors for words (i.e., nodes of parse trees), and the score function depends on vectors of the word pair (e.g., inner products). The main task of this paper is to explore effective encoding systems for dependency tree nodes. Two remarkable prior works on node representation are recurrent neural networks (RNNs) (Kiperwasser and Goldberg, 2016b) and biaffine mappings (Dozat and Manning, 2017). RNNs are powerful tools to collect sentence-level information, but the representations ignore features related to dependency structures. The biaffine mappings improve vanilla RNNs via a key observation: the representation of a word should be different regarding whether it is a head or a dependent (i.e., dependency tree edges are directional). Therefore, Dozat and Manning (2017) suggest distinguishing head and dependent vector of a word. Following this line of thought, it is natural to ask whether we can introduce more structured knowledge into node representations. In other words, if biaffine mappings encode the first order parent-children relations, can we incorporate other high-order relations (such as grandparents and siblings)? In this work, we propose to use graph neural networks (GNNs) for learning dependency tree node representations. Given a weighted graph, a GNN embeds a node by recursively aggregating node representations of its neighbours. For the parsing task, we build GNNs on weighted complete graphs which are readily obtained in graphbased parsers. The graphs could be fixed in prior or revised during the parsing process. By stacking multiple layers of GNNs, the representation of a node gradually collects various high-order information and bring global evidence into decoders’ final decision. 
Comparing with recent approximate highorder parsers (Kiperwasser and Goldberg, 2016b; Zheng, 2017; Ma et al., 2018), GNNs extract highorder information in a similar incremental manner: node representations of a GNN layer are computed based on outputs of former layers. However, the main difference is that, instead of extracting highorder features on only one intermediate tree, the update of GNN node vectors is able to inspect all intermediate trees. Thus, it may reduce the influence of a suboptimal intermediate parsing result. Comparing with the syntactic graph network 2476 (Marcheggiani and Titov, 2017; Bastings et al., 2017; Zhang et al., 2018b) which runs GNNs on dependency trees given by external parsers, we use GNNs to build the parsing model. And instead of using different weight matrices for outgoing and ingoing edges, our way of handling directional edges is based on the separation of head and dependent representations, which requires new protocols for updating nodes. We discuss various configurations of GNNs, including strategies on neighbour vector aggregations, synchronized or asynchronized node vector update and graphs with different edge weights. Experiments on the benchmark English Penn Treebank 3.0 and CoNLL2018 multilingual parsing shared task show the effectiveness of the proposed node representations, and the result parser is able to achieve state-of-the-art performances. To summarize, our major contributions include: 1. introducing graph neural networks to dependency parsing, which aims to efficiently encode high order information in dependency tree node representations. 2. investigating new configurations of GNNs for handling direct edges and nodes with multiple representations. 3. achieving state-of-the-art performances on PTB 3.0 (96.0% UAS, 94.3% LAS). 2 Basic Node Representations In this section, we review word encoding systems used in recurrent neural networks and biaffine mappings. Our GNN encoder (Section 3) will base on these two prior works. 1 Given a sentence s = w1, . . . , wn, we denote a dependency tree of s to be T = (V, E), where the node set V contains all words and a synthetic root node 0, and the edge set E contains node pairs (i, j, r) which represents a dependency relation r between wi (the head) and wj (the dependent). Following the general graph-based dependency parsing framework, for every word pair (i, j), a function σ(i, j) assigns it a score which measures how possible is wi to be the head of 1Following the convention of (Dozat and Manning, 2017), we use lowercase italic letters for scalars and indices, lowercase bold letters for vectors, uppercase italic letters for matrices. wj. 2 We denote G to be the directed complete graph in which all nodes in V are connected with weights given by σ. The correct tree T is obtained from G using a decoder (e.g., dynamic programming (Eisner, 1996), maximum spanning tree (McDonald et al., 2005), and greedy algorithm (Zhang et al., 2017)). In neural-network-based models, the score function σ(i, j) usually relies on vector representations of nodes (words) i and j. How to get informative encodings of tree nodes is important for training the parser. Basically, we want the tree node encoder to explore both the surface form and deep structure of the sentence. To encode the surface form of s, we can use recurrent neural networks (Kiperwasser and Goldberg, 2016b). Specifically, we apply a bidirectional long short-term memory network (biLSTM, (Hochreiter and Schmidhuber, 1997)). 
At each sentence position i, a forward LSTM chain (with parameter → θ) computes a hidden state vector →c i by collecting information from the beginning of s to the current position i. Similarly, a backward LSTM chain ( ← θ) collects information ←c i from the end of s to the position i: →c i = LSTM(xi, →c i−1; → θ ), ←c i = LSTM(xi, ←c i+1; ← θ ), where xi is the input of a LSTM cell which includes a randomly initialized word embedding e(wi), a pre-trained word embedding e′(wi) from Glove (Pennington et al., 2014) and a trainable embedding of wi’s part-of-speech tag e(posi), xi = ( e(wi) + e′(wi) ) ⊕e(posi). Then, a context-dependent node representation of word i is the concatenation of the two hidden vectors, ci = →c i ⊕ ←c i. (1) With the node representations, we can define the score function σ using a multi-layer perceptron σ(i, j) = MLP(ci ⊕cj) (Pei et al., 2015), or using a normalized bilinear function (A, b1, b2 are parameters), σ(i, j)= Softmaxi (c⊺ i Acj + b⊺ 1ci + b⊺ 2cj) ≜P(i|j), (2) 2We will focus on the unlabelled parsing when illustrating our parsing models. For predicting labels, we use the identical setting in (Dozat and Manning, 2017). 2477 x1 x2 x3 x4 GNN Layers RNN Encoder Decoder MST x1 x2 x3 x4 Figure 1: The GNN architecture. “RNN Encoder”+“Decoder” is equal to the Biaffine parser. For the “GNN Layers”, each layer is based on a complete weighted graph, and the weights are supervised by the layer-wise loss. which is actually a distribution on j’s head words. We note that from the RNN encoder, a node only obtains one vector representation. But as the dependency tree edges have directions, a word plays a different role regarding it is the head or the dependent in an edge. Thus, instead of using one vector representation, we employ two vectors to distinguish the two roles (Dozat and Manning, 2017). Concretely, based on ci, we use two multilayer perceptrons to generate two different vectors, hi = MLPh(ci), di = MLPd(ci). The score funcion in Equation 2 now becomes σ(i, j) = Softmaxi (h⊺ i Adj + b⊺ 1hi + b⊺ 2dj) .(3) The main task we will focus on in following sections is to further encode deep structure of s to node vectors hi and di. Specifically, besides the parent-child relation, we would like to consider high-order dependency relations such as grandparents and siblings in the score function σ. 3 Node Representation with GNNs 3.1 The GNN Framework We first introduce the general framework of graph neural network. The setting mainly follows the graph attention network (Velikovi et al., 2018). 3 Given a (undirected) graph G, a GNN is a multilayer network. At each layer, it maintains a set of node representations by aggregating information from their neighbours. 3There are other variants of GNNs. See (Battaglia et al., 2018) for a more general definition. Formally, let N(i) be neighbours of node i in G. We denote vt i to be the vector representation of i at the t-th GNN layer. vt i is obtained by vt i = g  W ∑ j∈N(i) αt ijvt−1 j + Bvt−1 i  , (4) where g is a non-linear activation function (we use LeakyReLU with negative input slope 0.1), W and B are parameter matrices. We use different edge weights αt ij, which is a function of vt−1 i and vt−1 j , to indicate different contributions of node j in building vt i. The update Equation 4 reads that the new representation vt i contains both the previous layer vector vt−1 i and a weighted aggregation of neighbour vectors vt−1 j . We can see that the GNN naturally catches multi-hop (i.e., high-order) relations. 
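Written for all nodes at once, Equation 4 amounts to a weighted adjacency matrix multiplying the matrix of previous-layer node vectors, followed by a linear map and the LeakyReLU activation. The sketch below is a minimal PyTorch rendering under that reading; the toy edge weights and dimensions are placeholders, and the parser's actual layers use the head/dependent update protocols introduced in Section 3.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GNNLayer(nn.Module):
    """One layer of the generic update in Equation 4."""

    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)  # transform of aggregated neighbours
        self.B = nn.Linear(dim, dim, bias=False)  # transform of the node itself

    def forward(self, v, alpha):
        """
        v:     (n, dim) node vectors v^{t-1} from the previous layer
        alpha: (n, n)   edge weights, alpha[i, j] = weight of neighbour j for node i
        returns (n, dim) node vectors v^t
        """
        neighbour_sum = alpha @ v                  # sum_j alpha_ij * v^{t-1}_j for every i
        out = self.W(neighbour_sum) + self.B(v)    # W * aggregate + B * self
        return F.leaky_relu(out, negative_slope=0.1)


# Usage on a toy complete graph of 5 nodes with soft edge weights.
n, dim = 5, 16
v0 = torch.randn(n, dim)
alpha = F.softmax(torch.randn(n, n), dim=-1)
layer = GNNLayer(dim)
v1 = layer(v0, alpha)
v2 = layer(v1, alpha)   # stacking layers collects multi-hop information
```

Stacking two such layers lets each node see its 2-hop neighbourhood, which is exactly the observation the following discussion builds on.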
Taking the first two layers for example, for every node i at the second layer, v2 i contains information of its 1-hop neighbours v1 j . Since v1 j has already encoded its own 1-hop neighbours at the first layer, v2 i actually encodes information of its 2-hop neighbours. Inspired by this observation, we think GNNs may help parsing with high-order features. On the other side, to parse with GNNs, instead of encoding one vector for each node, we need to handle the head representation hi and the dependent representation di simultaneously on a directed graph G. Furthermore, to approximate the exact highorder parsing (Eisner, 1996; McDonald and Pereira, 2006), we need each GNN layer to have a concrete meaning regarding parsing the sentence. For example, we could consider complete graphs 2478 (a) Grandparent (b) Grandchild (c) Sibling Figure 2: Three types of high-order information integrated in the parent-child pair (j, i). The grey shadows indicate which node representations already exist in first order feature. The orange shadows indicate which node representations should to be included for each high-order feature. Notice that k is actually a weighted sum of all applicable nodes (soft). Subfigure (a) helps to understand Equation 6. Since k acts as parent of j, to capture grandparent feature, hj should additionally contains information of hk. Subfigure (c) helps to understand Equation 7. Since k acts as child of j, to capture sibling feature, hj should additionally contains information of dk. (i.e., all nodes are connected) and set edge weights using conditional probabilities, αt ij = σt(i, j) = P t(i|j), (5) which is Equation 3 evaluated at layer t. 4 Thus, the graph at each layer appears as a “soft” parse tree, and the aggregated information would approximate high-order features on that tree. Comparing with existing incremental parsers which maintain only one intermediate tree (“hard”), the “soft” trees represented by GNN layers contain more information. In fact, the graphs keep all information to derive any intermediate parse trees. Therefore, it may reduce the risk of extracting high-order features on suboptimal intermediates. We detail the GNN model in the following. 3.2 High-order Information Given a node i, we mainly focus on three types of high-order information, namely, grandparents, grandchildren and siblings. We need to adapt the general GNN update formula to properly encode them into node representations. First, for incorporating grandparent information (Figure 2.a), we expect σt(j, i), which depends on the head vector of j and the dependent vector of i, not only considers the parent-child pair (j, i), but also consults the (“soft”) parent of j suggested by the previous layer (denoted by k). Specifically, the new head representation of node j should examine representations of its neighbors when they 4The model adds layer-wise loss functions to approach Equation 5, see Section 3.5. act as parents of j. In other word, we will update ht j using ht−1 k . Similarly, for encoding grandchildren of j in σt(j, i) (also denoted by k), we need the new dependent representation of node i examine its neighbors when they act as children of i. Thus, we will update dt i using dt−1 k . It suggests the following protocol,        ht i = g ( W1 ∑ j∈N (i) αt jiht−1 j + B1ht−1 i ) dt i = g ( W2 ∑ j∈N (i) αt ijdt−1 j + B2dt−1 i ) . (6) Note that we use αt ji in updating ht i and αt ji in updating dt i which is according to the probabilistic meaning of the weights. 
On the other side, for extracting siblings of i (again denoted by k) in (j, i) (Figure 2.c), the new head representation of node j should examine representations of its neighbors when they act as dependents of j. We expect the update of ht j involving dt−1 k It suggests our second update protocol 5,        ht i = g ( W1 ∑ j∈N (i) αt ijdt−1 j + B1ht−1 i ) dt i = g ( W2 ∑ j∈N (i) αt jiht−1 j + B2dt−1 i ) . (7) We can integrate Equation 6 and 7 in a single update which handles grandparents, grandchildren and siblings in an uniform way,        ht i = g ( W1 ∑ j∈N (i) (αt jiht−1 j + αt ijdt−1 j ) + B1ht−1 i ) dt i = g ( W2 ∑ j∈N (i) (αt ijht−1 j + αt jidt−1 j ) + B2dt−1 i ) . (8) Comparing with the general GNNs, above node vector updates are tailored to the parsing task using high-order feature rules. We think exploring the semantics of representations and graph weights would provide useful guidance in design of GNNs for specific tasks. Finally, besides the default synchronized setting, we also investigate asynchronized version of Equation 8,          h t−1 2 i =g ( W1 ∑ j∈N (i) (αt jiht−1 j +αt ijdt−1 j )+B1ht−1 i ) dt i =g ( W2 ∑ j∈N (i) (αt ijh t−1 2 j +αt jidt−1 j )+B2dt−1 i ) , (9) where we first update h, and then use the updated h to update d. 5The update of dt i in Equation 7 tries to include knowledge of other candidate heads of i. It does not correspond to a high-order feature, but for building a symmetric formula, we just include it in that way. 2479 3.3 Graph Weights In the graph-based parsing, the topology structure of G is mainly determined by edge weights αt ij. In fact, we usually work on a complete graph to obtain a parse tree. Thus, how to design αt ij is important to apply GNNs. As mentioned above, we can set αt ij equals to probability P t(i|j). In this section, we explore more settings on αt ij. First, instead of using the “soft” tree setting, we can assign {0, 1} values to αt ij to obtain a sparse graph, αt ij = { 1, i = arg maxi′ P t(i′|j) 0, otherwise , (10) In this setting, a node only looks at the head node with the highest probability. An extension of Equation 10 is to consider topk head nodes, which could include more neighbourhood information. Defining N t k(j) be a set of nodes with top-k P t(i|j) for node j, we renormalize Equation 3 on this set and assign them to αt ij, αt ij = { Softmaxi (h⊺ i Adj + b⊺ 1hi + b⊺ 2dj) , i ∈N t k(j) 0, otherwise (11) Finally, for comparison, one can ignore P t(i|j) and see each neighbour equally at each layer, αt ij = 1 n, ∀j ∈V, i ∈V/{j}. (12) 3.4 Decoding Given node representations and P(i|j), to build the final parse tree, we can either greedily set the head of wj to arg maxiP(i|j) which is fast for decoding but may output an ill-formed tree, or use a MST algorithm on all word pairs with weight P(i|j), which forms a valid tree but could be slower. To predict labels of dependency edges, we introduce P(r|i, j) which measures how possible a tree (i, j) holds a dependency relation r using another MLP. The setting is identical to the biaffine parser (Dozat and Manning, 2017). 3.5 Training Given the gold standard tree T, the training objective consists of two parts. First, we have a decoder behind the final GNN layer (denote by τ) which will perform decoding on both tree structures (using P τ(i|j)) and edge labels (using P(r|i, j)). The loss from the final classifier is negative loglikelihood of T, L0 = −1 n ∑ (i,j,r)∈T (log P τ(i|j) + log P(r|i, j)) . 
4 Experiments

We evaluate the proposed framework on the Stanford Dependency (SD) conversion of the English Penn Treebank (PTB 3.0) and on the Universal Dependencies (UD 2.2) treebanks (Nivre et al., 2018) used in the CoNLL 2018 shared task (Zeman et al., 2018). For English, we use the standard train/dev/test splits of the PTB (train=§2-21, dev=§22, test=§23); POS tags were assigned using the Stanford tagger with 10-way jackknifing of the training corpus (accuracy ≈ 97.3%). For the 12 languages selected from UD 2.2, we use the CoNLL 2018 shared task's official train/dev/test splits; POS tags were assigned by UDPipe (Straka et al., 2016).

Parsing performance is measured with five metrics. We report unlabeled (UAS) and labeled attachment scores (LAS), unlabeled (UCM) and labeled complete match (LCM), and label accuracy score (LA). For evaluations on the PTB, following Chen and Manning (2014), five punctuation symbols (“ ” : , .) are excluded from the evaluation. For the CoNLL 2018 shared task, we use the official evaluation script.

All basic hyper-parameters are the same as those reported in Dozat and Manning (2017), which means that our baseline system without GNN layers is a re-implementation of the biaffine parser. For the GNN models, the only new parameters are the matrices in $P^t(i \mid j)$ and the matrices in the GNN units. The weights $\lambda_1, \lambda_2$ in the objective $L$ are set to $\lambda_1 = 1$, $\lambda_2 = 0.5$. The hyper-parameters of our default settings are summarized in Appendix A. The default setting for our final parser is a 2-layer GNN model that uses the hd ▷h aggregating function (Equation 8) and the “H-first” asynchronous update method (Equation 9).[6]

  Parser                                 Type   UAS     LAS
  (Chen and Manning, 2014)               T      91.8    89.6
  (Dyer et al., 2015)                           93.1    90.9
  (Ballesteros et al., 2016)                    93.56   92.41
  (Weiss et al., 2015)                          94.26   91.42
  (Andor et al., 2016)                          94.61   92.79
  (Ma et al., 2018) §                           95.87   94.19
  (Kiperwasser and Goldberg, 2016a) §    G      93.0    90.9
  (Kiperwasser and Goldberg, 2016b)             93.1    91.0
  (Wang and Chang, 2016)                        94.08   91.82
  (Cheng et al., 2016)                          94.10   91.49
  (Kuncoro et al., 2016)                        94.26   92.06
  (Zheng, 2017) §                               95.53   93.94
  (Dozat and Manning, 2017)                     95.74   94.08
  Baseline                               G      95.68   93.96
  Our Model §                                   95.97   94.31

Table 1: Results on the English PTB dataset. The § indicates parsers using high-order features. “T” denotes a transition-based parser and “G” a graph-based parser.

4.1 Main Results

Firstly, we compare our method with previous work (Table 1). The first part of the table contains transition-based models, the second part contains graph-based models, and the last part includes three models that integrate hard high-order features. In general, our proposed method achieves significant improvements over our baseline biaffine parser and matches state-of-the-art models. In particular, it achieves a 0.29 percent UAS and 0.35 percent LAS improvement over the baseline parser, and a 0.1 percent UAS and 0.12 percent LAS improvement over the strong transition-based parser of Ma et al. (2018). This shows that our method can boost the performance of a graph-based dependency parser using the global and soft high-order information provided by the GNN architecture.

Secondly, we analyze the different aggregating functions for capturing high-order information (Table 2). We have several observations regarding these results.
Model hd ▷h (Equation 8) integrates high-order information of grandparents, grandchildren and siblings. Under all layer settings (1 to 3), its LAS is always better than h ▷h (Equation 6) model and d ▷h (Equation 7) model, which separately describe high-order information. However, UAS is not sensitive to different ways of aggregating. 6Our implementation is publicly available at: https: //github.com/AntNLP/gnn-dep-parsing GNN GNN Dev Test Layer Model UAS LAS UAS LAS l = 0 Baseline 95.58 93.74 95.68 93.96 l = 1 d ▷h 95.75 93.84 95.83 94.15 h ▷h 95.78 93.80 95.91 94.12 hd ▷h 95.77 93.87 95.88 94.23 l = 2 d ▷h 95.80 93.85 95.88 94.17 h ▷h 95.77 93.83 95.85 94.13 hd ▷h 95.79 93.90 95.92 94.24 l = 3 d ▷h 95.74 93.78 95.87 94.14 h ▷h 95.75 93.80 95.90 94.15 hd ▷h 95.71 93.82 95.93 94.22 Table 2: Impact of l and different high-order information integration methods on PTB dataset. “d ▷h” corresponds to the Equation 7, “h ▷h” corresponds to the Equation 6, “hd ▷h” corresponds to the Equation 8. Thirdly, we analyze the contributions and effects of the number of GNN layers (Figure 3 (a)). From the computation of GNNs, the more layers, the higher order of information is captured. The experimental results show that the 1-layer model significantly outperforms 0-layer model on all five scoring metrics. But continuing to increase the number of layers does not significantly improve performance. Previous work (Zheng, 2017) has shown that the introduction of more than secondorder information does not significantly improve parsing performances. Our results also present a consistent conclusion. Specifically, on UAS, LAS and LA, the 2-layer model has the highest sum of scores. On UCM and LCM, performance increases as the number of layers increases, showing the superiority of using high-order information in complete sentence parsing. In addition to parsing performance, we also focus on the speed. We observe that adding one layer of GNN slows down the prediction speed by about 2.1%. The 2-layer model can process 415.9 sentences per second on a single GPU. Its impact on the training process is also slight, increasing from 3 minutes to 3.5 minutes per epoch. We futher examine different performance of each layer in a 3-layer model (Figure 3 (b)). We observe that, as we move to a higher layer, the average loss decreases during the training process (L3 < L2 < L1). The figure shows that the introduction of high-order information leads to more accurate graph weights. We also do the MST decoding directly based on the graph weights on each layer and compare their development set UAS performances. From the layer-wise UAS 2481 95.6 95.7 95.8 95.9 UAS 93.9 94.0 94.1 94.2 LAS 96.2 96.3 96.4 96.5 LA 58.5 59.0 59.5 60.0 60.5 UCM 47.5 48.0 48.5 49.0 LCM 400 410 420 430 Sent/s 0 1 2 3 (a) 20 40 60 80 100 0.4 0.3 0.2 Loss 1 2 3 20 40 60 80 100 0.93 0.94 0.95 UAS UAS@ 1 UAS@ 2 UAS@ 3 (b) Figure 3: (a) Parsing performance and speed of different layers of our hd ▷h model on the test set. (b) Layer-wise training loss and development set’s UAS of our 3-layer hd ▷h model. GNN GNN Dev Test Layer Model UAS LAS UAS LAS l = 2 Synch 95.79 93.90 95.92 94.24 H-first 95.88 93.94 95.97 94.31 D-first 95.78 93.91 95.95 94.27 Table 3: Impact of different GNN update methods on PTB dataset. “Synch” is our default synchronized setting (Equation 8). “H-first” is an asynchronous update method that first updates head word representation (Equation 9). Similarly, the “D-first” model first updates dependent word representation. 
results, we observe that the difference between 2-layer and 3-layer is not obvious, but both are higher than the 1-layer. Fourthly, we present the influences of synchronized/asynchronized GNN update methods (Table 3). We first compare the synchronous update and asynchronous update methods. It shows that the later one works better without adding extral parameters. The reason may be that asynchronous methods aggregate high-order information earlier. The H-first model (Equation 9) is slightly better than the D-first model. This may indicate that dependent representation is more important than head representation, since the first updated representation will improve the representation of the late update, Fifthly, we experiment with unweighted graph (all set to 1) and hard weight graph (renormalized at top-k) (Table 4). A GNN based on completely unweighted graph is equivalent to uniformly incorporating representations of all neighbors for each node in the sentence, and similar to incorporating sentence embedding. Experiments show GNN GNN Dev Test Layer Model UAS LAS UAS LAS l = 2 All=1 95.71 93.73 95.76 94.07 Hard-1 95.69 93.70 95.80 94.13 Hard-2 95.73 93.78 95.90 94.20 Hard-3 95.81 93.88 95.88 94.20 l = 2 Soft 95.88 93.94 95.97 94.31 Table 4: Impact of different kinds of graph weights on PTB dataset. “All=1” means setting all weights to 1 (Equation 12), “Hard-k” means renormalization at the top-k weights of each node (Equation 11), “Soft” is our default model setting (Equation 8). that this approach will hurt the performance of the parser. For the Hard-k model (Equation 11), when k is equal to 1, it is equivalent to a GNN based on greedy decoding results, when k is equal to the sentence length, it is equivalent to our soft method. Experiments show that as k increases from 1 to 3, the performance of the Hard-k model is gradually improved. We also observe that hard weights affect the training stability of the parser. Finally, we report the results of our model on partial UD treebanks on the CoNLL 2018 shared task (Table 5). Our model uses only word and XPOS tag (predict by UDPipe), without any cross lingual features. 7 We use FastText multilingual pretrained vectors instead of Glove vectors. 8 The results show that our GNN parser performs better on 10 UD 2.2 treebanks. For bg, our parser does not improve performance. For nl, our parser improves 0.22 UAS, although LAS is slightly lower 7The results should not compare with the shared task’s official results. 8https://github.com/facebookresearch/ fastText 2482 UD Baseline Parser GNN Parser 2.2 UAS LAS UAS LAS bg 91.69 88.25 91.64 88.28 ca 92.08 89.75 92.12 89.90 cs 91.22 88.73 92.00 89.85 de 86.11 81.86 86.47 81.96 en 83.72 81.07 83.83 81.16 es 90.95 88.65 91.28 88.93 fr 86.46 83.15 86.82 83.73 it 90.70 88.80 90.81 88.91 nl 87.72 84.85 87.94 84.82 no 88.27 85.97 88.57 86.33 ro 89.07 84.18 89.11 84.44 ru 88.67 86.29 88.94 86.62 Avg. 88.89 85.96 89.13 86.24 Table 5: UAS and LAS F1 scores on 12 UD 2.2 test sets from CoNLL 2018 shared task. than the baseline parser. For average performance, it achieves 0.24 percent UAS and 0.28 percent LAS improvement over the baseline parser. 4.2 Error Analysis Following McDonald and Nivre (2011); Ma et al. (2018), we characterize the errors made by the baseline biaffine parser and our GNN parser. Analysis shows that most of the gains come from the difficult cases (e.g. long sentences or longrange dependencies), which represents an encouraging sign of the proposed method’s benefits. Sentence Length. 
Figure 4 (a) shows the accuracy relative to sentence length. Our parser significantly improves over the baseline parser on long sentences, but is slightly worse on short sentences (length ≤ 10). Dependency Length. Figure 4 (b) shows the precision and recall relative to dependency length. Our parser comprehensively and significantly improves over the baseline parser in both precision and recall. Root Distance. Figure 4 (c) shows the precision and recall relative to the distance to the root. Our parser comprehensively and significantly improves the baseline parser's recall. For precision, however, the baseline parser performs better over long distances (≥ 6) than our parser.

5 Related Work

Graph structures have been extended to model text representation, giving competitive results for a number of NLP tasks. By introducing context neighbours, graph structure has been added to the sequence modelling tool LSTMs, improving performance on text classification, POS tagging and NER tasks (Zhang et al., 2018a). Based on syntactic dependency trees, DAG LSTMs (Peng et al., 2017) and GCNs (Zhang et al., 2018b) have been used to improve the performance of relation extraction. Based on the AMR semantic graph representation, graph state LSTMs (Song et al., 2018), GCNs (Bastings et al., 2017) and gated GNNs (Beck et al., 2018) have been used as encoders for graph-to-sequence learning. To our knowledge, we are the first to investigate GNNs for the dependency parsing task.

The design of the node representation network is a key problem in neural graph-based parsers. Kiperwasser and Goldberg (2016b) use BiRNNs to obtain node representations with sentence-level information. To better characterize the direction of an edge, Dozat and Manning (2017) feed the BiRNN outputs to two MLPs to distinguish each word as head or dependent, and then construct a biaffine mapping for prediction. This approach also performs well on multilingual UD datasets (Che et al., 2018). Given a graph, a GNN can embed each node by recursively aggregating the node representations of its neighbours (Battaglia et al., 2018). Based on a biaffine mapping, GNNs can thus enhance node representations by recursively integrating neighbours' information. The message passing neural network (MPNN) (Gilmer et al., 2017) and the non-local neural network (NLNN) (Wang et al., 2018) are two popular GNN methods. Due to the convenience of self-attention in handling variable sentence lengths, we use a GAT-like network (Veličković et al., 2018) belonging to the NLNN family, and further explore its aggregating functions and update methods for our specific task. Applying the GAT to a directed complete graph is similar to the Transformer encoder (Vaswani et al., 2017), but the Transformer framework focuses only on head-dependent-like relations, whereas we further extend it to capture high-order information for dependency parsing.

Several works have investigated high-order features in neural parsing. Kiperwasser and Goldberg (2016b) use a bottom-up tree-encoding to extract hard high-order features from an intermediate predicted tree. Zheng (2017) uses an incremental refinement framework to extract hard high-order features from a whole predicted tree. Ma et al. (2018) use greedy decoding to replace MST decoding and extract local second-order features at the current decoding step.
2483 [1-10] [11-20][21-30][31-40][41-50] >50 Sentence Length 0.90 0.91 0.92 0.93 0.94 Accuracy baseline our (a) 1 2 3 4 5 6 7 >7 Dependency Length 0.87 0.89 0.91 0.93 0.95 Precision 1 2 3 4 5 6 7 >7 Dependency Length 0.87 0.89 0.91 0.93 0.95 Recall baseline our (b) 1 2 3 4 5 6 7 >7 Distance to Root 0.92 0.94 0.96 0.98 Precision 1 2 3 4 5 6 7 >7 Distance to Root 0.92 0.94 0.96 0.98 Recall baseline our (c) Figure 4: Parsing performance of baseline and our best parser relative to length and graph factors. Comparing with the previous work, GNNs can efficiently capture global and soft high-order features. 6 Conclusions We propose a novel and efficient dependency parser using the Graph Neural Networks. By recursively aggregating the neighbors’ information, our parser can obtain node representation that incorporates high-order features to improve performance. Experiments on PTB and UD2.2 datasets show the effectiveness of our proposed method. Acknowledgement The authors wish to thank the reviewers for their helpful comments and suggestions and Ziyin Huang, Yufang Liu, Meng Zhang and Qi Zheng for their comments on writing. This research is (partially) supported by STCSM (18ZR1411500). The corresponding authors are Yuanbin Wu and Man Lan. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack LSTM parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2005–2010. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1957–1967. Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vin´ıcius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, C¸ aglar G¨ulc¸ehre, Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. 2018. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 273–283. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55–64, Brussels, Belgium. Association for Computational Linguistics. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 740–750. Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2204–2214. 2484 Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 334–343. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340– 345, Copenhagen. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1263–1272. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Eliyahu Kiperwasser and Yoav Goldberg. 2016a. Easyfirst dependency parsing with hierarchical tree lstms. TACL, 4:445–461. Eliyahu Kiperwasser and Yoav Goldberg. 2016b. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL, 4:313– 327. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1744–1753. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1403–1414. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515. Association for Computational Linguistics. Ryan T. McDonald, Koby Crammer, and Fernando C. N. Pereira. 2005. Online large-margin training of dependency parsers. In ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 91–98. Ryan T. McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics, 37(1):197–230. Ryan T. McDonald and Fernando C. N. Pereira. 2006. Online learning of approximate dependency parsing algorithms. 
In EACL 2006, 11st Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, April 3-7, 2006, Trento, Italy. Joakim Nivre et al. 2018. Universal Dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University, Prague, http://hdl.handle.net/ 11234/1-1983xxx. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 313–322. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. TACL, 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1616– 1626. Milan Straka, Jan Hajiˇc, and Jana Strakov´a. 2016. UDPipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoro, Slovenia. European Language Resources Association. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural 2485 Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Petar Velikovi, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. 2018. Non-local neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 7794–7803. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 323–333. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–20, Brussels, Belgium. 
Association for Computational Linguistics.

Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665–676. Association for Computational Linguistics.

Yue Zhang, Qi Liu, and Linfeng Song. 2018a. Sentence-state LSTM for text representation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 317–327.

Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2205–2215.

Xiaoqing Zheng. 2017. Incremental graph-based neural dependency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1655–1665.

A Hyper-Parameters

  Layer     Hyper-parameter       Value
  Input     Word                  100
            POS tag               100
            Glove                 100
  LSTM      encoder layers        3
            encoder size          400
  MLP       arc MLP size          500
            rel MLP size          100
  Dropout   embeddings            0.33
            hidden states         0.33
            input states          0.33
            MLP                   0.33
  Trainer   optimizer             Adam
            learning rate         0.002
            (β1, β2)              (0.9, 0.9)
            decay rate            0.75
            decay step length     5000
  GNN       graph layers          2

Table 6: Hyper-parameters for experiments.
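For readers re-implementing the system, the settings in Table 6 (together with the loss weights from Section 4) might be gathered into a single configuration object along the following lines; the field names are ours and purely illustrative, not part of the released code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParserConfig:
    # Input embedding sizes (Table 6)
    word_dim: int = 100
    pos_dim: int = 100
    glove_dim: int = 100
    # BiLSTM encoder
    encoder_layers: int = 3
    encoder_size: int = 400
    # MLPs feeding the biaffine scorer
    arc_mlp_size: int = 500
    rel_mlp_size: int = 100
    # Dropout (embeddings, hidden states, inputs and MLPs all use 0.33)
    dropout: float = 0.33
    # Adam optimizer settings
    learning_rate: float = 0.002
    betas: tuple = (0.9, 0.9)
    decay_rate: float = 0.75
    decay_step_length: int = 5000
    # GNN
    gnn_layers: int = 2
    # Loss weights (Section 4)
    lambda1: float = 1.0
    lambda2: float = 0.5

config = ParserConfig()
```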
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2486–2505 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2486 Wide-Coverage Neural A* Parsing for Minimalist Grammars John Torr Miloˇs Stanojevi´c Mark Steedman Shay B. Cohen School of Informatics University of Edinburgh 11 Crichton Street, Edinburgh, UK [email protected] [email protected] [email protected] [email protected] Abstract Minimalist Grammars (Stabler, 1997) are a computationally oriented, and rigorous formalisation of many aspects of Chomsky’s (1995) Minimalist Program. This paper presents the first ever application of this formalism to the task of realistic wide-coverage parsing. The parser uses a linguistically expressive yet highly constrained grammar, together with an adaptation of the A* search algorithm currently used in CCG parsing (Lewis and Steedman, 2014; Lewis et al., 2016), with supertag probabilities provided by a bi-LSTM neural network supertagger trained on MGbank, a corpus of MG derivation trees. We report on some promising initial experimental results for overall dependency recovery as well as on the recovery of certain unbounded long distance dependencies. Finally, although like other MG parsers, ours has a high order polynomial worst case time complexity, we show that in practice its expected time complexity is O(n3). The parser is publicly available.1 1 Introduction Parsers based on linguistically expressive formalisms, such as Head-Driven Phrase Structure Grammar (HPSG; Pollard and Sag 1994) and Combinatory Categorial Grammar (CCG; Steedman 1996), were shown in Rimell et al. (2009) and Nivre et al. (2010) to be more effective at recovering certain unbounded long-distance dependencies than those merely approximating human grammar with finite state or context-free covers. Such dependencies can be vital for tasks like open domain question answering, for example. Furthermore, as proven independently by Huybregts (1984) and Shieber (1985), some languages exhibit constructions which put them beyond even 1https://github.com/mgparsing/astar_ mg_parser the weak generative capacity of any context-free grammar. The investigation of parsing systems based on more powerful (mildly) context-sensitive formalisms has therefore been a very active area of research within the field of computational psycholinguistics over the past 35 years (see, e.g., Joshi 1985, 1990; Rambow and Joshi 1994; Steedman 2000; Hale 2011; Stabler 2013; Stanojevi´c and Stabler 2018). Another linguistically expressive grammatical framework is Transformational Grammar (Chomsky, 1957, 1965, 1981), the latest incarnation of which is the Minimalist Program (MP; Chomsky 1995). A defining property of MP is that constituents move. For example, in 1a below, what moves to the left periphery of the matrix clause from a deep subject position and will therefore be interpreted as the semantic AGENT of eat; in 1b, meanwhile, it moves from the deep object position and so is interpreted instead as the semantic PATIENT of eat. (1) a. Whati do you think ti eats mice? b. Whati do you think mice eat ti? MP continues to dominate much of theoretical syntax, and Stabler’s (1997) rigorous formalisation of this framework has proven a popular choice for investigations into human sentence processing (Hale, 2003; Kobele et al., 2013; Stabler, 2013; Graf and Marcinek, 2014; Graf et al., 2015; Gerth, 2015; Stanojevi´c and Stabler, 2018). 
On the other hand, TG has enjoyed far less popularity within computational linguistics more generally,2 which is unfortunate given that it is arguably the most extensively developed syntactic theory across the greatest number of languages, many of which are otherwise under-resourced. Conversely, the process of constructing large grammar fragments and 2For an anti-Chomskyan perspective on why this disconnect came about, see Pullum (2009). 2487 subjecting these to computational testing can have a salutary impact on the syntactic theory itself, forcing choices between competing analyses of the same construction, and exposing incompatibilities between analyses of different constructions, along with areas of over/undergeneration which may otherwise go unnoticed (Bierwisch 1963; Abney 1996; both cited in M¨uller 2016). The received wisdom within NLP is that TG/MP is too complex and insufficiently formalised to be applied to realistic parsing tasks (see M¨uller 2016 for discussion). Such assumptions prompted Sproat and Lappin (2005) to issue a challenge to the Minimalist community which has hitherto gone unanswered: to construct a widecoverage statistical parser trained in a supervised fashion and exhibiting performance that is comparable with other state-of-the-art parsers. This paper is the first to take up this challenge, and will introduce the first ever wide-coverage parser in the Minimalist (and arguably the entire TG) tradition, along with some promising initial experimental results. The parser is equipped with a linguistically expressive, wide-coverage grammar based on an extended version of Stabler’s (1997) Minimalist Grammars (MG) formalism, which is a rigorously formal, computationally oriented and polynomially parseable interpretation of mainstream MP that is weakly equivalent to Multiple Context-Free Grammars (MCFG; Seki et al. 1991). The parser itself is an adaptation of a highly efficient A* CCG parsing algorithm (Lewis and Steedman, 2014) with a bi-LSTM model trained on MGbank, an MG version of the English Penn Treebank (PTB; Marcus et al. 1993) currently under development. 2 Background Beginning in the 1960s, a number of parsers were developed which implemented aspects of the various iterations of Chomskyan syntactic theory (e.g. Petrick 1965; Zwicky et al. 1965; Woods 1970, 1973; Plath 1973; Marcus 1980; Kuhns 1990; Fong 1991; Stabler 1992; Fong and Ginsburg 2012), but most of these systems operated over relatively closed domains and were never evaluated against wide-coverage treebank test data. Principar (Lin, 1993), and its descendant Minipar (Lin, 1998, 2001), are the only truly widecoverage parsers in the Chomskyan tradition of which we are aware. Minipar incorporates MP’s bare phrase structure and some of its economy principles. It is also statistical, having been selftrained on a 1GB corpus. However, while these parsers model the phrase structure and locality constraints of TG, they are not transformational: movement is merely ‘simulat[ed]’ (Lin, 1993, page 116) by passing features up a precompiled network of nodes representing a tree, from the site of the trace to the site of the antecedent, with the latter merged directly into its surface position, in the style of GPSG. Furthermore, in this approach, antecedents necessarily c-command their traces (Lin, 1993, page 115), presumably making these parsers unsuitable for implementing MP analyses involving remnant movement (see Stabler 1999). 
2.1 MG parsers A number of parsers have been developed for Stablerian MGs, which do allow for actual movement, including remnant movement. What all working MG parsers (Harkema, 2001; Hale, 2003; Stabler, 2013; Stanojevi´c and Stabler, 2018) have until now shared in common is that they are smallscale theoretical implementations equipped only with toy lexicons/grammars. There has been a limited amount of research into probabilistic MGs, notably in generative locally normalised models (Hale, 2003; Hunter and Dyer, 2013). However, these works remain so far untested owing to the unavailability, until very recently, of any MG treebank for training and evaluating models. 2.2 MGbank MGbank (Torr, 2017, 2018) is a treebank of MG derivation trees constructed in part manually by hand-annotating a subset of PTB sentences and in part automatically using a parser equipped with the manually constructed grammar and guided by the corresponding PTB and CCGbank (Hockenmaier and Steedman, 2007) structures. The corpus was continuously machine tested for over- and undergeneration throughout its development. It currently covers over 463,000 words of the PTB, or nearly 56% of its trees, and contains over 47,100 lexical entries and over 1,100 MG lexical categories. The average sentence length in MGbank is 16.9 (vs 21.7 in the PTB) and the maximum sentence length is 50. The derivation trees produced by the parser have also been transduced into Xbar and MG derived phrase structure trees. The MGbank grammar has been designed to capture many long distance dependencies not included in the original treebank, including the bind2488 ing of reflexive/reciprocal anaphors and floating quantifiers by their antecedents, the dependency between the two subconstituents of a discontinuous quoted expression (“funny thing,” says the kicker, “both these candidates are named Rudolph Giuliani.”), the licensing of polarity items such as anything, anymore and much by interrogative and negation heads (you have *(not) eaten anything), and the distributional dependency between expletive there and its obligatorily indefinite DP associate (there seem to be some/several/*the/*those problems). All of these long distance dependencies, along with those involved in control, raising, topicalization and wh movement, are integrated into the grammar itself, obviating the need for separate post-processing techniques to recover them (Johnson, 2002; Cahill et al., 2004). The MG lexical categories have also been annotated with over 100 fine-grained selectional and agreement restriction features (e.g. +3SG, -NOM, +INF, MASC, +INDEF, +FOR, MNR, +LOC, etc) to avoid many instances of unwanted overgeneration. Movement is clearly a very powerful operation. However, it is constrained here using many of the locality constraints proposed in the TG literature. These include not only Stabler’s (1997) strict version of the Shortest Move Constraint, but also a partially derelativized version (DSMC) inspired by Rizzi (1990), along with versions of the specifier/adjunct island constraints, the right roof constraint, complex NP constraint, coordinate structure constraint, that-trace filter, Principle A of Chomsky’s (1981) Binding Theory, and so on. 3 Minimalist Grammars Our parser uses the MG formalism described in Torr and Stabler (2016; henceforth T&S) and Torr (2018, 2019). Here we give only a brief overview. 
MGs are strongly lexicalised, with ordered feature sequences on lexical categories determining both the subcategorization frames of words and the movement operations which must apply. There are four basic types of structure building features: =x/x= selectors and x selectees, and +f licensors and -f licensees. Selectors and selectees trigger Merge operations, with x= indicating rightward selection and =x leftward selection (similar to the forward and backward slash notation in CCG). Licensors and licensees trigger Move operations. Except for a single c selectee at the root of the tree, all features entering the derivation must be checked and deleted by applying one of a small set of (here, around 45) abstract binary Merge and unary Move rules; these rules concatenate and reorder expressions’ string components. Consider the following MG lexicon. ✏, they, ✏:: d ✏, saw, ✏:: d= =d v ✏, who, ✏:: d -wh ✏, [int], ✏:: v= +WH c Each entry consists of a string component, followed by a type separator,3 followed by a sequence of syntactic features. The epsilons represent empty strings and are slots for left and right dependent strings to be merged into.4 Strings enclosed in square brackets are also empty, and appear in this form at the lexical level only simply to make the trees easier to read. Figure 1 shows the MG derivation tree for the embedded question who they saw, along with its corresponding phrase structure tree in which λ indicates an empty node position from which a phrase has moved (informally, a trace); the leaf nodes of the derivation tree are lexical items while the final surface string appears at the root node; binary nodes represent Merge operations while unary nodes represent Move operations. The interesting step occurs at the lowest binary node: because who has a -wh licensee still to check, its string is not merged into the right ✏(complement) slot of saw when these two items are Merged; instead, it is kept in a separate moving chain until its -wh feature is checked by the +WH of [int] via an application of Move. 4 The Parser Our parser uses an adaptation of the A* search algorithm for CCG presented in Lewis and Steedman (2014) (henceforth, L&S). In this section we first review that algorithm, before going on to show how it was adapted to the MG formalism. 4.1 A* CCG parsing Combinatory Categorial Grammar (CCG; Steedman 2000) is another linguistically expressive formalism capable of recovering unbounded long distance dependencies. Like MG, CCG is strongly lexicalised, with a large lexical category set and a 3:: is used for lexical items, and : for derived items. 4Heads are kept separate from their left and right dependents to allow for head movement operations (Stabler, 2001) 2489 who, ✏, they saw : c ✏, ✏, they saw : +WH c, who : -wh they, saw, ✏: v, who : -wh ✏, saw, ✏: =d v, who : -wh ✏, who, ✏:: d -wh ✏, saw, ✏:: d= =d v ✏, they, ✏:: d ✏, [int], ✏:: v= +WH c CP C0 VP V0 λi V saw D they C [int] Di who Figure 1: MG derivation tree (left) and phrase structure tree (right) for the embedded question who they saw. The derivation has been simplified for ease of exposition by removing case and head movements, as well as the null tense and light verb heads. small set of abstract combinatory rules, the most basic of which is forward/backward application (equivalent to MG’s Merge). Categories are either basic (NP, S, etc) or functional. The functional categories determine the subcategorization frame of the words they label. 
For example, the category for a transitive verb is (S\NP)/NP, which says that this word must combine with an (object) NP on its right (indicated by the forward slash), which will yield a category which must combine with a second (subject) NP on its left (indicated by the backward slash). In place of movement, CCG uses type raising and function composition rules to capture unbounded long distance dependencies. CCG already has a very well-established research tradition in wide-coverage parsing (see, e.g., Hockenmaier and Steedman 2002; Clark and Curran 2007b; Lewis and Steedman 2014; Xu 2016; Lewis et al. 2016; Wu et al. 2017). A key advancement in CCG parsing that enabled it to become efficient enough to support large-scale NLP tasks was the introduction of Markovian supertagging techniques in Clark and Curran (2007b) that were borrowed from Lexicalised Tree Adjoining Grammar (LTAG; Bangalore and Joshi 1999). Supertagging is essentially just part-of-speech tagging for strongly lexicalised formalisms, which have much larger tagsets than the 50 or so tags used in the PTB. Because the supertags predetermine much of the combinatorics, this is sometimes referred to as ‘almost parsing’. Inspired by the A* algorithm for PCFGs of Klein and Manning (2003), L&S present a simple yet highly effective CCG parsing model which is factored over the probabilities assigned by the lexical supertagger alone, with no explicit model of the derivation at all. This approach is highly efficient and avoids the need for aggressively pruning the search space, which degraded the performance of earlier CKY CCG parsers. Instead, the parser considers the complete distribution of the 425 most commonly occurring CCG lexical categories for each word. The supertagger was originally a unigram log-linear classifier, but Lewis et al. (2016) greatly enhanced its accuracy by exchanging this for a stacked bi-LSTM neural model. The key difference between A* and CKY CCG parsing is the fact that A* uses search heuristics that avoid building the whole chart without compromising the correctness guarantees. This is achieved using an agenda implemented as a priority queue of items ranked by their cost, calculated as a product of their inside cost and an upper bound on their expected outside cost. The agenda is initialised with the full set of 425 supertags for each word. The parser pops the item with the lowest cost from the agenda, stores it in the chart if it is not already there, and attempts to combine it with other items already present the chart. Newly created items have their costs calculated before being added to the priority queue agenda. The entire process is repeated until a complete parse for the sentence is returned. The algorithm guarantees that the first parse returned is the most probable (i.e. the Viterbi parse) according to the model. L&S treat a CCG parse y as a list of lexical categories c0. . .cn−1 together with a derivation, and make the simplifying assumptions that all derivations licensed by the grammar are equally likely, and that the probability of a given lexical category assignment is conditionally independent of all the other assignments given the sentence. Let Y be the set of all derivations licensed by the grammar; then the optimal parse ˆy for a given sentence S with words w0. . .wn−1 is given as: ˆy = argmaxy2Y n−1 Y i=0 p(ci | S) (1) 2490 Let ↵be a set of indices {i,..,j} for words wi...wj labelled with category sequence ci...cj inside some expression. 
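The agenda-driven search described above can be sketched as follows. This is our own simplification rather than the parser's actual code: chart items, the combine step (which must try each item as both left and right argument and enforce the grammar), and the heuristic are left as callbacks, and all costs are negative log probabilities, so lower is better and costs add.

```python
import heapq
from itertools import count
from typing import Callable, Dict, Hashable, Iterable, List, Tuple

Item = Hashable

def astar_parse(
    init_items: Iterable[Tuple[Item, float]],   # one (item, cost) per supertag per word
    combine: Callable[[Item, Item], Iterable[Item]],
    heuristic: Callable[[Item], float],         # admissible outside-cost bound
    is_goal: Callable[[Item], bool],
):
    """Skeleton of an agenda-driven A* parser with a supertag-factored model."""
    tie = count()                                # tie-breaker so items are never compared
    agenda: List[Tuple[float, int, float, Item]] = []
    chart: Dict[Item, float] = {}                # item -> best inside cost seen

    def push(item: Item, inside: float) -> None:
        # priority = inside cost + upper bound on the outside cost
        heapq.heappush(agenda, (inside + heuristic(item), next(tie), inside, item))

    for item, inside in init_items:
        push(item, inside)

    while agenda:
        _, _, inside, item = heapq.heappop(agenda)
        if item in chart:                        # a cheaper copy was already expanded
            continue
        if is_goal(item):
            return item, inside                  # first complete parse popped
        chart[item] = inside
        # Try to combine the new item with everything already in the chart;
        # new inside costs are just sums of supertag costs under this model.
        for other, other_inside in list(chart.items()):
            for new_item in combine(item, other):
                push(new_item, inside + other_inside)
    return None                                  # no parse found
```

Because the priority adds an admissible upper bound on the outside cost to the inside cost, the first complete parse popped from the agenda is the most probable one under the model, which is the property exploited by the CCG A* parser described here.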
The inside probability of ↵is simply the product of the probabilities of the lexical category assignments given the sentence. s(↵) = Y i2↵ p(ci | S) (2) The upper bound estimate for the outside probability of a span ↵is given by h(↵) = Y i/2↵ max ci p(ci | S) (3) where maxci p(ci | S) is the probability of the most likely category assigned to word wi according to the supertagger, which can be precomputed for the sentence and cached. To avoid numerical errors caused by multiplying together very small numbers, we convert the probabilities to log space costs and use addition rather than multiplication. 4.2 A* MG parsing The simplicity, speed and performance of L&S’s A* CCG parser made it attractive for a first implementation of a wide-coverage MG parser. However, while CCG and MG are similar in some respects5 (such as the fact that they are both strongly lexicalised), there are also some fundamental differences between the formalisms which mean that some adaptations are needed in order to port this A* algorithm to MGs. The first (trivial) issue is that MG derivations feature discontinuous spans in order to allow for movement, as we saw in Figure 1. Therefore, we must redefine ↵in Equations 2 and 3 to be the set of word indices covered by all the spans contained within an MG expression. The second issue is that, following T&S, the MGbank grammar allows for so-called Acrossthe-Board (ATB) head and phrasal movements in order to capture adjunct control, parasitic gaps, and certain coordination structures. ATB phrasal movement is illustrated in 2 below. (2) Whoi did Jack say Mary likes ti and Pete hates ti? In 2, who has moved from two separate base generated object positions in across-the-board fashion. T&S (adapting ideas in Kobele 2008) propose to account for this by initially generating 5See Berwick and Epstein (1995) on the convergence of Minimalist syntax and Categorial Grammar. two instances of who in the two object positions and then later unifying them into a single item when the second conjunct is merged into the main structure. For A*, when two expressions containing unifiable movers are merged together, only one of those movers must contribute to the cost of the resulting expression in order to avoid excessive penalisation for what is now just a single instance of the moving item. We can achieve this for both ATB head and phrasal movement by first calculating the sum of the costs of the two expressions that are Merged, and then subtracting from this the cost of one member of each pair of unified movers. In the MGbank grammar (unlike in Kobele 2008), it can be the case that two unified (head) movers have different derivational histories, in which case they may well have different costs6. In such cases, the parser uses the greater of these two costs when calculating the inside cost of the newly formed expression. If the lower of the two costs were used instead, it may make some costs non-monotonically increasing.7 The final problem relates to the fact that, unlike CCG, MG allows for phonetically null heads (following mainstream MP), but supertaggers can only tag the overt words of a sentence. However, we would like our probability model to also be defined over the null heads. Addressing this problem, Torr (2018) proposes an algorithm for extracting a set of complex LTAG-like MG lexical supertag categories from a corpus of MG derivation trees, which we adopt here. Each supertag contains precisely one overt atomic MG lexical item and zero or more atomic null heads anchored to it. 
For example, in Figure 1, the [int] head would be included inside the supertag anchored by saw. The supertagging model can now be refactored over these complex, overt MG categories; the parser continues to manipulate the atomic categories, but now keeps track of the fact that the v= of [int] must obligatorily be checked by the v feature of (this specific instance of) saw, and vice versa. During parsing, the overt heads carry the entire cost of their supertag into the agenda; the null heads are simply assigned a zero cost. Pseudocode for the A* MG parser can be found in Appendix A. 6Another difference is that T&S do not adopt the GPSGstyle slash feature mechanism used in Kobele (2008). 7Note that one drawback to only using the cost of one of the two unified instances is that the strict optimality guarantees of A* are lost. 2491 5 Experiments 5.1 Model description We used two types of MG grammars in our experiments: Abstract and Reified. The difference between them is that in the Abstract grammar, most of the 100 or so fine-grained selectional and agreement restriction features have been removed with the exception of the following 5 features, which are necessary to the inner workings of the parser: ANA, EDGE, IT, +NONE, MAIN. The Reified grammar is clearly more constrained, which should make it more precise (at some expense to recall) but at the same time more difficult to supertag correctly due to the sparsity that comes with a higher number of supertags. Extracting the complex MG supertags from the entire MGbank corpus resulted in a Reified tagset of 3926 items and an Abstract tagset of 2644 items.8 For both Abstract and Reified we used the same supertagging neural architecture that works by initially embedding the word tokens using the final layer of an ELMo embedder (Peters et al., 2018), followed by a single affine transformation to compress the embeddings into a vector of size 128 for each word. These embeddings are further fed into a two layer bi-LSTM (Hochreiter and Schmidhuber, 1997; Graves, 2013). Finally, the hidden states of the final layer of the bi-LSTM are passed through a two layer MLP to predict the distribution of the supertags for each word. The parameters are trained using an Adam optimizer with a learning rate of 0.0002. 5.2 Recovering MGBank dependencies We first tested the parser on its ability to recover global syntactic and semantic (local and non-local) dependencies extracted from MGbank. We extracted labelled and unlabelled bi-lexical dependencies for each binary non-terminal in the Xbar phrase structure trees transduced from the MG derivation trees.9 To make up for the short8This number of tags is closer to the 4727 elementary trees of the TAG treebank of Chen (2001) than to CCGbank’s (Hockenmaier and Steedman, 2007) 1286 lexical categories. 9As in Collins (1999), the labels are triples of the parent, non-head child and head child categories. The dependencies include both local dependencies and those created by movement, hence this evaluation is more akin to the deep dependency evaluation discussed in Clark et al. (2002) for CCG than to the more standard practice of evaluating parsers in terms of just local dependencies (e.g. Collins 1999). 
The semantic head of the clause is taken to be the main verb, while its syntactic head, if present, is the overt complemenmodel F1 P R E syntax LAB Abstract 79.33 81.87 76.94 21.01 Reified 80.10 83.43 77.02 21.61 ULAB Abstract 84.57 87.15 82.14 29.59 Reified 85.19 88.63 82.02 30.49 semantics LAB Abstract 74.90 77.17 72.75 20.96 Reified 75.47 78.53 72.64 21.56 ULAB Abstract 83.69 86.16 81.36 33.30 Reified 84.11 87.47 81.01 34.50 Table 1: Results on the whole MGbank test set with P, R and E indicating precision, recall and exact match respectively. fall in the number of trees in MGbank, we used both sections 00 and 01 for development and both sections 23 and 24 for testing, with sections 02-22 used for training. Table 1 shows the results on the MGbank test set. On both dependency types, the Reified model has higher precision, F1-score and exact matching, but has a lower score on recall owing to the constraining impact of the selectional and agreement features: The Abstract model returned parses for 1924 sentences out of 1998 in the test set (i.e. 96.5%), while the Reified model returned 1902 (i.e. 95.4%). The F1 scores in table 1 are respectable for a first attempt at wide-coverage MG parsing, though it should be noted that the MGbank test set is somewhat easier than the PTB test set owing to the difference of 4.8 in average sentence length between the two corpora. 5.3 Comparison to CCG Cross-formalism comparison is in general a difficult task (Clark and Curran, 2007a) because it is necessary to account both for (1) the differences in how the parsers work and (2) the differences in the kinds of structures they predict. To control for (1) we re-implemented a CCG parser similar to L&S’s CCG A* algorithm but using our supertagger to make the comparison fair. We first trained our CCG supertagger on the CCG trees from CCGbank, but only on those sentences that are also present in MGbank. We then tested the CCG parser on the recovery of CCGbank dependencies for the test sentences also appearing in tizer; similarly, nouns are taken to be semantic heads of PPs and DPs while their syntactic heads are the preposition and determiner respectively; the semantic heads of coordination structures are the conjuncts themselves, while the syntactic head is the coordinator. Unlabelled dependencies are also undirected, as is standard practice in CCG evaluation. 2492 model F1 P R E LAB Our CCG A* 87.4 87.2 87.6 40.0 EasyCCG A* 83.8 87.2 80.7 31.4 ULAB Our CCG A* 92.8 92.5 93.0 47.2 EasyCCG A* 90.1 93.8 86.8 35.9 Table 2: Results of CCG parsers on all 1994 sentences of MGbank test set for CCG dependencies. MGbank, and compared this to an off the shelf CCG parser, namely EasyCCG, that was trained over the whole of the CCGbank training set. The results are shown in Table 2. Our CCG parser shows much better performance in spite of being trained on much less data than EasyCCG, making it a tough point of comparison for our MG parser. To account for (2) we compared the CCG and MG parsers on their ability to recall the dependencies for which both CCGbank and MGbank agree by taking as the test set the intersection of the gold unlabelled undirected CCGbank and syntactic MGbank dependencies for sentences appearing in the MGbank test set. Precision cannot be computed due to the difficulties in normalising predictions on the CCG and MG sides: one might predict more dependencies which may be correct but are not predicted by the syntactic theory used in the other parser and therefore would be penalised. 
The results of this evaluation are shown in Table 4. The CCG parser clearly exhibits superior performance, although the MG parser performs respectably given that it is up against a near state-ofthe-art parser for a formalism with a much longer history in wide-coverage parsing. The higher performance of the CCG parser is likely the result of a more complete search due to the lower complexity of the formalism (the CCG parser parsed all sentences) and of the much smaller supertag set that is easier to predict as evident in Table 3. This means that the MG parser requires a larger amount of training data than the CCG parser to achieve similar levels of accuracy and efficiency (because the speed of A* parsing depends on the quality of the probabilistic model). We tried replacing all MG supertags occurring less than twice in the training data with UNK tags to reduce the noise from unreliable tags, but this hurt performance. Once MGbank’s coverage is increased, the difference between the formalisms may narrow. The MG parser is a prototype Python implementation, and to keep parsing times practical the search space was pruned so that only the 40 most top k CCG MG Abstract MG Reified 1 95.73 83.11 80.62 5 99.41 97.22 95.89 10 99.64 98.42 97.66 20 99.78 99.01 98.42 40 99.83 99.26 98.81 Table 3: Supertagging accuracies for each grammar as the probability of having the correct supertag in the topk predictions per word. parser R E CCG A* 95.30 69.03 MG Abstract A* 91.75 54.38 MG Reified A* 92.65 55.67 Table 4: Results on overlapping gold CCGbank and syntactic MGbank dependencies in sections 23 and 24. likely supertags per word were retained. Even so, the parser still timed out on a few sentences in the test set. Once reimplemented in a faster language, its recall should increase as it will have more time to explore a less aggressively pruned search space. 5.4 Parsing speed The CKY MG parser of Harkema (2001), when augmented with head movement, has a worst case time complexity of O(n4k+12) where k is the maximum number of phrasal movers that can be contained in any single expression. In the MGbank formalism, owing to DSMC, k = 4 (see Torr 2019), meaning that the worst case complexity of parsing with this formalism using Harkema’s algorithm would be O(n28). Our A* parsing algorithm operates in a similar fashion, except that it takes an additional multiplicative cost of O(log n) due to the usage of a heap data structure for implementing the agenda. O(n28 log n) is, of course, a prohibitively high time complexity. However, although A* does not improve on the worst case theoretical complexity of CKY, it can dramatically improve its practical expected complexity. Figure 2 shows the scatter plot of parsing times for different sentence lengths and the average curve. The average curve is less informative in very long sentences due to the smaller number of parses, but in regions where there are more data points a clear pattern can be observed: a cubic polynomial curve approximates average time taken to parse sentences extremely well, which means that the expected time complexity of MG 2493 0 10 20 30 40 0 2 4 6 8 10 words minutes average 0.00012 n3 Figure 2: Parsing speed for Abstract model on test set. parsing with our grammar and statistical model is O(n3). This is much better than the worst case analysis, although the variance is high, with some sentences still requiring a very long time to parse. 
Recently, Stanojevi´c (2019) has shown that with relatively small adjustments to the parser’s inference rules, MGs with head movement can be parsed in O(n2k+5) time in the worst case,10 which for the MGbank grammar equates to O(n13), a dramatic improvement over O(n28). We hope to leverage these efficiency gains in the future to improve the expected time complexity of the parser. 5.5 Coverage Section 00 of the PTB contains 1921 sentences with an average sentence length of 21.9 words; other than a 212 word outlier, the maximum sentence length is 96. When run over all of these sentences, the Reified parser returned parses for 1490 (77.6%) sentences with an average sentence length of 14 and a maximum sentence length of 53. The Abstract parser returned 1549 parses (80.6%) with an average sentence length of 15.3 and a maximum sentence length of 49. The CCG A* parser returned 1909 parses (99.4%). 5.6 Recovery of unbounded dependencies As noted in Section 1, the recovery of unbounded dependencies, including wh-object questions, is a 10Fowlie and Koller (2017) previously demonstrated that MGs without head movement could be parsed in O(n2k+3) worst case time, which was already a dramatic improvement over Harkema’s original result. However, Stanojevi´c (2019) shows that adding head movement to Fowlie and Koller’s system increases complexity to O(n2k+9). primary motivation for using linguistically expressive parsers in NLP. Wh-object questions themselves are extremely rare in the PTB, but object relative clauses, which also involve unbounded movement, are relatively frequent. Following Clark et al. (2004), we manually evaluated our parser on the free and non-free object (and embedded subject) relative clauses in section 00 of the PTB, as well as on the two examples of so-called tough movement. The MGbank analyses of these constructions are discussed in Appendix B. There are 24 examples of non-free object relative dependencies across 20 sentences in section 00, and 17 free object relative dependencies across 16 sentences. All of these sentences, along with indications of which dependencies our parser did and did not recover, are given in Appendix C, and are presented using the MGbank tokenization used by the MG A* parser (the CCG A* parser used the original CCGbank tokenization). On the free object relatives, our Abstract parser performed best, recovering 13/17 dependencies. The parser only predicted 14 free object relatives meaning that the precision was 13/14. Of the 4 free object relative dependencies in the data which it missed, 3 were in very long sentences on which the parser timed out (the time-out was set to 30 mins), suggesting that a faster re-implementation may achieve higher recall. In the one case which the parser actually got wrong, it correctly identified that there was a free object relative dependency, but extracted the wrong object from a double object verb. Clark et al. (2004) reported recall of 14/17 (with precision 14/15), while our A* CCG parser recovered 15.5/17 of the free object relative dependencies with precision also 15.5/17. Non-free object relatives are harder than both wh object questions and free object relatives because they require a head noun to be identified in addition to an extraction site. Our Abstract parser performed best here, retrieving 10/24; the CCG A* parser recovered 15/24, with precision of 15/21 (Clark et al. (2004) also reported recall of 15/24 and precision of 15/20). 
Our Reified parser retrieved 13/24 with precision 13/17 when allowed to reparse any sentences it initially failed to find any analyses for with increasingly relaxed tag-dictionary settings. In two of the errors, the parser correctly identified the extraction site, but attached the relative clause to the wrong NP. For example, in sentence 1, the parser attached whom 2494 Sony hosted for a year to complaint rather than to American. Appositive relative clauses such as this are treated as involving adjunction of the relative clause to the head noun in MGbank, and the choice of attachment to either American or complaint is underdetermined by the model (the same supertag containing the requisite [rel] and [adjunctizer] heads will be assigned to hosted in either case).11 For the restrictive relative clause in sentence 8, the parser incorrectly assigned the supertag containing the [relativizer] null head (which causes the noun to undergo promotion) to the noun esteem rather than to damage, hence the problem here originates with the scores assigned by the supertagger. In the other two errors, the parser incorrectly predicted an object extraction dependency, again owing to tagging mistakes. We also evaluated on the 2 tough movement examples in section 00, one of which is shown below. (3) ThatA i got hard [CP tA0 i to take tA i ]. Tough movement is of linguistic interest because it arguably involves a DP licensed in two case positions as well as so-called improper movement, in which an A0-movement step feeds subsequent A-movement. In order to generate tough movements, MGbank uses a null [op] head which has the effect of a unary type-changing rule mapping an ordinary DP into a DP with additional Aand A0-movement licensees. Our parser failed to correctly analyse either of the two examples in section 00 owing to supertagging errors. For example, in 3 there are three important tagging decisions to be made: hard must be assigned the supertag for a tough adjective, that the supertag for a pronoun which undergoes tough movement,12 and take the supertag for a transitive verb. The highest scoring tag assigned to hard by the Abstract supertagger was the supertag for a regular adjective that takes a CP complement (eager to help). The correct tough adjective supertag, meanwhile only ranked 14th, meaning that the A* search algorithm never got to consider it. Furthermore, the highest ranked tag for take was the supertag for an unergative intransitive verb; the correct transitive verb tag appeared in second place. Finally, the supertag for a pronoun undergoing tough movement was not included in the 40 11One way to resolve such ties would be to augment the supertag-factored model with a head-dependency model. 12This supertag contains both the overt category assigned to that and the [op] null head (see Figure 6) in Appendix B. tags assigned to that owing to the fact that this supertag did not appear in the training data at all. We tried increasing the 8 examples of tough movement in the training data to 18 examples (including one example with that as the tough mover) by performing some additional hand annotation of PTB sentences. This bolstered the tough adjective supertag to 10th position, while the tough movement supertag for that now appeared in 28th position, but this was not enough to enable the parser to correctly recover the tough movement analysis. Our A* CCG parser scored 1/2 (the same as Clark et al. 
2004); its higher performance is no doubt due to the much smaller tag set and the fact that CCG does not require special supertags for tough-moved DPs. 6 Conclusion We have presented the first ever wide-coverage Transformational Grammar parser. The results of this initial attempt are optimistic. First, the accuracy on recovering syntactic and semantic dependencies predicted by the Minimalist syntax is reasonable considering the higher complexity of the mechanisms behind Minimalism compared to other formalisms. In comparison to CCG, a formalism with a much longer history of widecoverage parsing, performance currently lags behind. However, the gap will likely narrow as the size and quality of MGbank improves and as better probabilistic models are developed enabling these systems to parse a higher number of sentences. Another important and optimistic result of this investigation is that Minimalist Grammar parsing is not as slow as may have been expected given its worst case time complexity. Worst case complexity results are sometimes raised as a criticism of TG theories. Our results show that the combination of a good neural probabilistic model and A* search, together with a strong formal grammar, makes Minimalist parsing practical for the majority of sentences. Acknowledgments The first author was supported by an EPSRC PhD studentship, the second author by an ERC H2020 Advanced Fellowship GA 742137 SEMANTAX grant, the third author by a Google Faculty Award, and the fourth author by a Bloomberg award. We would also like to thank the three anonymous reviewers for their helpful feedback. 2495 References Steven Abney. 1996. Statistical methods and linguistics. In The Balancing Act: Combining Symbolic and Statistical Approaches to Language, pages 1– 26. MIT Press. Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25:237–265. Robert C Berwick and Samuel D Epstein. 1995. Computational minimalism: The convergence of the minimalist syntactic program and categorial grammar. TWLT-10: Algebraic Methods in Language Processing, Enschede, the Netherlands. Rajesh Bhatt. 2002. The raising analysis of relative clauses: Evidence from adjectival modification. Natural language semantics, 10(1):43–90. Manfred Bierwisch. 1963. Grammatik des deutschen Verbs. Akademie Verlag. Michael Brody. 1993. ✓-theory and arguments. Linguistic Inquiry, pages 1–23. A Cahill, M Burke, R O’Donovan, J van Genabith, and A Way. 2004. Long-distance dependency resolution in automatically acquired wide-coverage pcfg-based lfg approximations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 320–327, Barcelona, Spain. Association for Computational Linguistics. John Chen. 2001. Towards efficient statistical parsing using lexicalized grammatical information. Ph.D. thesis, University of Delaware. Noam Chomsky. 1957. Syntactic Structures. Mouton, The Hague. Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA. Noam Chomsky. 1977. On wh-movement. Formal syntax, pages 71–132. Noam Chomsky. 1981. Lectures on Government and Binding. Foris, Dordrecht. Noam Chomsky. 1995. The Minimalist Program. MIT Press, Cambridge, Massachusetts. Noam Chomsky and Howard Lasnik. 1977. Filters and control. Linguistic inquiry, 8(3):425–504. Stephen Clark and James Curran. 2007a. Formalismindependent parser evaluation with ccg and depbank. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 248–255. Association for Computational Linguistics. Stephen Clark and James R. Curran. 2007b. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33:493–552. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures with a wide-coverage ccg parser. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 327–334. Association for Computational Linguistics. Stephen Clark, Mark Steedman, and James R Curran. 2004. Object-extraction and question-parsing using ccg. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Sandiway Fong. 1991. The computational implementation of principle-based parsers. In Robert Berwick, Steve Abney, and Carol Tenny, editors, Principle-Based Parsing, pages 65–82. Kluwer, Dordrecht. Sandiway Fong and Jason Ginsburg. 2012. Computation with doubling constituents: Pronouns and antecedents in phase theory. In Anna Maria Di Sciullo, editor, Towards a Biolinguistic understanding of grammar: Essays on interfaces, pages 303–338. John Benjamins. Meaghan Fowlie and Alexander Koller. 2017. Parsing minimalist languages with interpreted regular tree grammars. In Proceedings of the Thirteenth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+13), pages 11–20. Association for Computational Linguistics. Michael L Fredman and Robert Endre Tarjan. 1987. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM (JACM), 34(3):596–615. Sabrina Gerth. 2015. Memory limitations in sentence comprehension. Ph.D. thesis, University of Potsdam. Thomas Graf, Brigitta Fodor, James Monette, Gianpaul Rachiele, Aunika Warren, and Chong Zhang. 2015. A refined notion of memory usage for minimalist parsing. In Proceedings of the 14th Meeting on the Mathematics of Language (MoL 2015), pages 1–14. Association for Computational Linguistics. Thomas Graf and Bradley Marcinek. 2014. Evaluating evaluation metrics for minimalist parsing. In Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics, pages 28–36. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. John Hale. 2003. Grammar, Uncertainty and Sentence Processing. Ph.D. thesis, Johns Hopkins University. John T Hale. 2011. What a rational parser would do. Cognitive Science, 35(3):399–443. 2496 Hendrik Harkema. 2001. Parsing Minimalist Languages. Ph.D. thesis, UCLA, Los Angeles, California. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with combinatory categorial grammar. In In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 335–342. Association for Computational Linguistics. Julia Hockenmaier and Mark Steedman. 2007. Ccgbank: A corpus of ccg derivations and dependency structures extracted from the penn treebank. Computational Linguistics, 33(3):355–396. Norbert Hornstein. 2001. Move! A Minimalist Theory of Construal. Blackwell Publishing. Tim Hunter and Chris Dyer. 2013. 
Distributions on minimalist grammar derivations. In Proceedings of the 13th Meeting on the Mathematics of Language (MoL 13), pages 1–11, Sofia, Bulgaria. The Association of Computational Linguistics. Riny Huybregts. 1984. The weak inadequacy of context-free phrase-structure grammars. In Ger de Haan, Mieke Trommelen, and Wim Zonneveld, editors, Van Periferie naar Kern, pages 81–99. Foris, Dordrecht. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136–143, Philadelphia. Association for Computational Linguistics. Aravind Joshi. 1985. Tree-adjoining grammars. In David Dowty, Lauri Karttunen, and Arnold Zwicky, editors, Natural Language Parsing, pages 206–250. Cambridge University Press, Cambridge. Aravind K Joshi. 1990. Processing crossed and nested dependencies: An automation perspective on the psycholinguistic results. Language and cognitive processes, 5(1):1–27. Richard S. Kayne. 1994. The Antisymmetry of Syntax, Linguistic Inquiry Monograph Twenty-Five. MIT Press, Cambridge, Massachusetts. Dan Klein and Christopher D Manning. 2003. A* parsing: fast exact viterbi parse selection. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 40–47. Association for Computational Linguistics. Gregory M Kobele. 2008. Across-the-Board Extraction in Minimalist Grammars. In Proceedings of the Ninth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+9), volume 9, pages 113–128. Association for Computational Linguistics. Gregory M Kobele, Sabrina Gerth, and John Hale. 2013. Memory resource allocation in top-down minimalist parsing. In International Conference on Formal Grammar, pages 32–51. Springer, Association for Computational Linguistics. Robert J. Kuhns. 1990. A PARLOG implementation of government-binding theory. In 13th International Conference on Computational Linguistics, COLING 1990, University of Helsinki, Finland, August 20-25, 1990, pages 394–396. Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. Lstm CCG parsing. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 221–231. Association for Computational Linguistics. Mike Lewis and Mark Steedman. 2014. A* ccg parsing with a supertag-factored model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990–1000. Dekang Lin. 1993. Principle-based parsing without overgeneration. In Proceedings of the 31st annual meeting on Association for Computational Linguistics, pages 112–120. Association for Computational Linguistics. Dekang Lin. 1998. Dependency-based evaluation of minipar. In Proceedings of the Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation, Granada, Spain. Dekang Lin. 2001. Latat: Language and text analysis tools. In Proceedings of the first international conference on Human language technology research, pages 1–6. Association for Computational Linguistics. Wolfgang Maier, Miriam Kaeshammer, and Laura Kallmeyer. 2012. Plcfrs parsing revisited: Restricting the fan-out to two. In Proceedings of the 11th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+ 11), pages 126–134. Association for Computational Linguistics. 
Mitch Marcus, Beatrice Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. Mitchell P Marcus. 1980. Theory of syntactic recognition for natural languages. MIT press. 2497 Stefan M¨uller. 2016. Grammatical theory: From transformational grammar to constraint-based approaches. Language Science Press. Mark-Jan Nederhof. 2003. Weighted deductive parsing and knuth’s algorithm. Computational Linguistics, 29(1):135–143. Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833–841. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Stanley Roy Petrick. 1965. A recognition procedure for transformational grammars. Ph.D. thesis, Massachusetts Institute of Technology. Warren J Plath. 1973. Transformational grammar and transformational parsing in the request system. In COLING 1973 Volume 2: Computational And Mathematical Linguistics: Proceedings of the International Conference on Computational Linguistics, volume 2. Carl Pollard and Ivan Sag. 1994. Head Driven Phrase Structure Grammar. CSLI Publications, Stanford, CA. Geoffrey Pullum. 2009. Computational linguistics and generative linguistics: The triumph of hope over experience. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, pages 12–21, Athens, Greece. Lawrence Erlbaum Associates. Andrew Radford. 2004. Minimalist Syntax: Exploring the Structure of English. Cambridge University Press. Owen Rambow and Aravind K. Joshi. 1994. A processing model for free word order languages. In C. Clifton Jr., L. Frazier, and K. Rayner, editors, Perspectives on Sentence Processing. L. Erlbaum, Hillsdale, NJ. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 813–821, Singapore. Association for Computational Linguistics. Luigi Rizzi. 1990. Relativized minimality. The MIT Press. Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context free grammars. Theoretical Computer Science, 88:191– 229. Stuart Shieber. 1985. Evidence against the contextfreeness of natural language. Linguistics and Philosophy, 8:333–343. Richard Sproat and Shalom Lappin. 2005. A challenge to the Minimalist community. Linguist List, 16:1156. Edward P. Stabler. 1992. The logical approach to syntax: foundations, specifications, and implementations of theories of government and binding. MIT Press. Edward P. Stabler. 1997. Derivational minimalism. In Logical Aspects of Computational Linguistics (LACL’96), volume 1328 of Lecture Notes in Computer Science, pages 68–95, New York. Springer. Edward P. Stabler. 1999. Remnant movement and complexity. In Gosse Bouma, Erhard Hinrichs, GeertJan M. 
Kruijff, and Richard Oehrle, editors, Constraints and resources in natural language syntax and semantics, volume 2, pages 299–326. CSLI Stanford, CA. Edward P. Stabler. 2001. Recognizing head movement. In Logical Aspects of Computational Linguistics: 4th International Conference, LACL 2001, Le Croisic, France, June 27-29, 2001, Proceedings., volume 4, pages 245–260. Edward P. Stabler. 2013. Two models of minimalist, incremental syntactic analysis. Topics in Cognitive Science, 5:611–633. Miloˇs Stanojevi´c and Edward Stabler. 2018. A sound and complete left-corner parsing for minimalist grammars. In Proceedings of the Eight Workshop on Cognitive Aspects of Computational Language Learning and Processing, pages 65–74. Miloˇs Stanojevi´c. 2019. On the computational complexity of head movement and affix hopping. In Formal Grammar 2019. Springer Berlin Heidelberg. Mark Steedman. 1996. Surface Structure and Interpretation. Linguistic Inquiry Monograph 30. MIT Press, Cambridge, MA. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, Massachusetts. John Torr. 2017. Autobank: a semi-automatic annotation tool for developing deep minimalist grammar treebanks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Software Demonstrations, Valencia, Spain, April 3-7 2017, pages 81–86. Association for Computational Linguistics. 2498 John Torr. 2018. Constraining mgbank: Agreement, lselection and supertagging in minimalist grammars. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 590–600. Association for Computational Linguistics. John Torr. 2019. Wide-Coverage Statistical Parsing with Minimalist Grammars. Ph.D. thesis, University of Edinburgh. John Torr and Edward P. Stabler. 2016. Coordination in minimalist grammars: Excorporation and across the board (head) movement. In Proceedings of the Twelfth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+12), pages 1–17. Association for Computational Linguistics. William A Woods. 1970. Transition network grammars for natural language analysis. Communications of the ACM, 13(10):591–606. William A Woods. 1973. An experimental parsing system for transition network grammars. Natural language processing, pages 111–154. Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2017. A dynamic window neural network for ccg supertagging. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3337–3343. Wenduan Xu. 2016. Lstm shift-reduce ccg parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1754–1764. Association for Computational Linguistics. Arnold M Zwicky, Joyce Friedman, Barbara C Hall, and Donald E Walker. 1965. The mitre syntactic analysis procedure for transformational grammars. In Proceedings of the November 30–December 1, 1965, fall joint computer conference, part I, pages 317–326. ACM. 
A Appendix: Pseudocode for the MG A* algorithm Algorithm 1 MG parser A* algorithm 1: while agenda is not empty do 2: item1 deleteMax(agenda) 3: if item1 is goal item then 4: return item1 5: else if item1 /2 chart then 6: add(chart, item1) 7: R [ ] 8: if can move item1 then 9: add(R, move(item1)) 10: for item2 2 chart do 11: if can merge item1 and item2 then 12: add(R, merge(item1, item2)) 13: for item 2 R do 14: if item /2 {chart [ agenda} then 15: add(agenda, item) 16: else if item 2 agenda then 17: updateWeight(agenda, item) The A* search algorithm presented in Algorithm 1 is an adaptation of the weighted deductive parsing approach (Nederhof, 2003; Maier et al., 2012) to Minimalist Grammars. It uses two data structures, an agenda and a chart. The agenda is implemented as a priority queue with support for an increase-key operation. Concretely, we use a Fibonacci heap (Fredman and Tarjan, 1987), but many other types of heap could be used for the same purpose. The chart is currently organised similarly to that of standard CKY in that it constitutes the uppertriangular portion of an (n + 1) x (n + 1) matrix, where n is the length of the string, and each cell [i, j] in this matrix references some span from position i to position j in the input string. However, whereas in standard CKY, these cell indices reference the span of an entire expression, in our MG parser they reference only an expression’s narrow yield, i.e. all those indices which are not part of some span which is undergoing or may undergo movement. For example, the narrow yield of the TP expression in 4 below is the set of indices corresponding to the words Jack and gone there (shown in bold face). The moving chain why is excluded from the narrow yield, as is the head string has because, depending on the type of complementizer which selects for this TP, has may un2499 dergo head movement to yield why has Jack gone there, or not undergo head movement to yield, e.g., you know why Jack has gone there. (4) Jack, has, gone there : t, why : -wh Expressions within each cell are also currently placed into bins according to the first feature of their head chain, so that when the system encounters a t= feature, for example, it only needs to consider merging this expressions with other expressions whose first feature is t. The call updateWeight(agenda, item) finds the current (backpointer, weight) pair of item in the agenda and compares it to the newly constructed (backpointer, weight) pair. The weight includes both the inside and outside scores. Only the pair with a lower weight is kept in the agenda. This update is made efficient by using an additional hashtable and the increase-key heap operation. B Appendix: MGbank analyses of relative clauses and tough movement The MGbank analysis of restrictive relative clauses is illustrated in phrase structural terms for the phrase the book of ghost stories which Jack read in Figure 3; the derivation tree for the simpler phrase the book which Jack read is shown in Figure 4. This analysis is inspired by an analysis in Bhatt (2002) and departs from that of Kayne (1994), where the wh determiner and the NP form a constituent in both the deep and surface structure (with the NP moving to the specifier of the wh DP to derive the correct word ordering). One reason for preferring Bhatt’s analysis is that the wh item appears to form a constituent with the rest of the clause, as evidenced by the fact that it can form a conjunct with it: the book [which Jack wrote] and [which Mary read]. 
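Algorithm 1 is a weighted deductive A* loop over an agenda and a chart. The sketch below gives a minimal Python rendering of that control flow; it uses the standard-library heapq as the priority queue instead of the Fibonacci heap used in the actual implementation (heapq has no increase-key, so superseded entries are simply skipped), treats scores as costs to be minimised rather than weights to be maximised, and leaves the MERGE and MOVE inference rules and the heuristic as placeholders.

```python
import heapq
import itertools

def astar_mg_parse(axioms, is_goal, try_move, try_merge, priority):
    """Weighted deductive A* search in the style of Algorithm 1.

    axioms    : initial items (e.g. supertagged lexical items with their spans)
    is_goal   : item -> bool
    try_move  : item -> list of items derivable from it by MOVE
    try_merge : (item1, item2) -> list of items derivable by MERGE
    priority  : item -> float; assumed to be inside cost plus outside
                (heuristic) estimate, with lower values preferred.
    Items are assumed to be hashable (e.g. tuples of categories and spans).
    """
    counter = itertools.count()                      # tie-breaker for the heap
    agenda = [(priority(it), next(counter), it) for it in axioms]
    heapq.heapify(agenda)
    best = {it: priority(it) for it in axioms}       # best priority seen so far
    chart = set()

    while agenda:
        weight, _, item1 = heapq.heappop(agenda)
        if weight > best.get(item1, float("inf")):
            continue                                 # stale entry (no increase-key)
        if is_goal(item1):
            return item1
        if item1 in chart:
            continue
        chart.add(item1)

        results = list(try_move(item1))
        for item2 in chart:
            # Algorithm 1 abstracts over which item is the selector;
            # here both orders are tried.
            results.extend(try_merge(item1, item2))
            results.extend(try_merge(item2, item1))

        for item in results:
            w = priority(item)
            if item not in chart and w < best.get(item, float("inf")):
                best[item] = w                       # plays the role of updateWeight
                heapq.heappush(agenda, (w, next(counter), item))
    return None                                      # no parse found
```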
Bhatt (pages 79-81) suggests that the head noun moves to the left periphery and projects an NP layer over the clause, but does not specify what features drive this movement. MGbank uses a null type changing [relativizer] head to introduce a -n licensee onto the head noun which is then attracted to the +N of a [nom] head that selects the clause as its complement and projects the clausal NP layer. The [nom] head is needed here because in the MGbank formalism it is only possible for a specifier to project its fine-grained selectional properties and requirements (MASC, +3SG, -INF etc), not its selectee (n) category, hence the type of projecting movement Bhatt proposes must be precompiled into the lexicon. Note that relative that is often treated as a complementizer rather than as a relative pronoun in MP (Radford, 2004, pages 228-230). When present in MGbank relatives, it therefore appears in the slot occupied by the null [decl] head in Figure 3, with a null [wh] head playing a similar role to the overt wh item in this example (selectional restrictions ensure that the grammar does not overgenerate examples like the book which that Jack read which violate the Doubly Filled Comp Filter (Chomsky and Lasnik, 1977)). Free relatives, as in I like [what you’re reading], have a very similar analysis, but project only as far as CP (as they lack any head noun) and are then selected for by a null determiner head. Appositive relatives, as in the book, which you’ve read, is on the table, receive a head external analysis, again projecting only as far as CP and then adjoining to their head noun. Figures 5 and 6 show the phrase structure and derivation trees for the tough movement example that got hard to take, which is one of the two examples of tough movement found in section 00 of the PTB, where it is embedded inside the larger sentence “that got hard to take,” he added. It has generally been assumed since (Chomsky, 1977, 1981) that the infinitival clause is a type of relative clause with a null constituent in its left periphery that is co-indexed both with the object trace and the subject of the tough adjective. This null constituent is in fact included in the original PTB, although it is generally just ignored by treebank parsers. MGbank follows Brody (1993) and Hornstein (2001) in treating it as a trace of movement. C Appendix: The PTB section 00 relative clause examples Figures 7 and 8 show all the examples of free and non-free (non-reduced) relative clauses in section 00 of the PTB, and indicate which ones our best models did and did not correctly analyse. 2500 DP NP N0 CP C0 CP TP T0 vP v0 VP V0 λk ⇤m λk v v [trans] Vm read λl T [past] DPl Jack C [decl] C [rel] DPk λi D which N [nom] NPi NP PP P0 DPj NP N stories N ghost D [det] P of µj N book N [relativizer] D the Figure 3: MGbank’s phrase structural analysis of the phrase the book of ghost stories which Jack read, which contains a restrictive relative clause as the complement of the determiner the. The tree has been simplified in certain respects, for instance by removing the successive cyclic wh movement through spec-vP which is assumed in MP and included in the actual MGbank trees. ⇤indicates a trace of head movement, λ indicates a trace of overt phrasal movement, and µ indicates the landing site of a covert movement. 
2501 book, ✏, which Jack read : n{3SG} ✏, ✏, which Jack read : +N{x} n{x}, book : -n{3SG} which, ✏, Jack read : c{RELAT}, book : -n{3SG} ✏, ✏, Jack read : +WH c{RELAT}, book : -n{3SG}, which : -wh{3SG} ✏, ✏, Jack read : c, book : -n{3SG}, which : -wh{3SG} Jack, ✏, read : t, book : -n{3SG}, which : -wh{3SG} ✏, ✏, read : +CASE t, book : -n{3SG}, which : -wh{3SG}, Jack : -case ✏, read, ✏: lv, book : -n{3SG}, which : -wh{3SG}, Jack : -case ✏, read, ✏: =d lv, book : -n{3SG}, which : -wh{3SG} ✏, read, ✏: v, book : -n{3SG}, which : -wh{3SG} ✏, read, ✏: +CASE v, book : -n{3SG}, which : -case{3SG} -wh{3SG} ✏, which, ✏: D{3SG} -case{3SG} -wh{3SG}, book : -n{3SG} ✏, ✏, book : n{3SG.REL} -n{3SG} ✏, book, ✏:: n{3SG} ✏, [relativizer], ✏:: n{x}= n{REL.x} -n{x} ✏, which, ✏:: n{x}= D{x} -case{x} -wh{x} ✏, read, ✏:: d= +CASE v ✏, [trans], ✏:: >v= =d lv ✏, Jack, ✏:: D -case ✏, [past], ✏:: lv= +CASE t ✏, [decl], ✏:: t= c ✏, [rel], ✏:: c{+DECL}= +WH c{RELAT} ✏, [nom], ✏:: c{+RELAT}= +N{x} n{x} Figure 4: A derivation tree for the bracketed NP in the [book which Jack read]. Irrelevant selectional and agreement features are omitted to save space. 2502 CP TP T0 vP v0 adjP adj0 CP C0 CP TP vP v0 VP V0 λi ⇤k λi v v [trans] Vk take D [pro-d] T to C [decl] C [rel] λi adj hard λi v got λi T [past] DPi D0 Dj that D [op] µj C [decl] Figure 5: Derived Xbar tree showing MGbank’s analysis for the phrase that got hard to take with tough movement. The tree has been simplified here by removing the successive cyclic wh movement through spec-vP that is standardly assumed in MP and is included in the actual MGbank trees. Note that µ indicates the landing site of a covert movement. 2503 that, got, hard to take : t{PAST} ✏, ✏, got hard to take : +CASE{+NOM} t{PAST}, that : -case{ACC.NOM.3SG} ✏, got, hard to take : lv{PAST}, that : -case{ACC.NOM.3SG} ✏, got, hard to take : =d lv{PAST}, that : D{3SG} -case{ACC.NOM.3SG} ✏, hard, to take : adj, that : D{3SG} -case{ACC.NOM.3SG} ✏, hard, to take : +TOUGH adj, that : -tough D{3SG} -case{ACC.NOM.3SG} ✏, ✏, to take : c{RELAT}, that : -tough D{3SG} -case{ACC.NOM.3SG} ✏, ✏, to take : +WH c{RELAT}, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, ✏, to take : c, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, to, take : t, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, take, ✏: lv{BARE.TRANS}, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, take, ✏: =d lv{BARE.TRANS}, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, take, ✏: v{BARE.TRANS}, that : -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, take, ✏: +CASE{+ACC} v{BARE.TRANS}, that : -case{ACC} -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, ✏, that : D{OP.3SG} -case{ACC} -wh -tough D{3SG} -case{ACC.NOM.3SG} ✏, ✏, that : +case{y} D{OP.3SG} -case{ACC} -wh -tough D{3SG} -case{y}, ✏: -case{ACC.NOM.3SG} ✏, that, ✏:: D{3SG} -case{ACC.NOM.3SG} ✏, [op], ✏:: d{-OP.x}= +case{y} D{OP.x} -case{ACC} -wh -tough D{x} -case{y} ✏, take, ✏:: d= +CASE{+ACC} v{BARE.TRANS} ✏, [trans], ✏:: >v{+TRANS.x}= =d lv{x} ✏, [pro-d], ✏:: D ✏, to, ✏:: lv{+BARE}= t ✏, [decl], ✏:: t= c ✏, [rel], ✏:: c= +WH c{RELAT} ✏, hard, ✏:: c{+RELAT}= +TOUGH adj ✏, got, ✏:: adj= =d lv{PAST} ✏, [past], ✏:: lv{+PAST.x}= +CASE{+NOM.x} t{x} Figure 6: MG derivation tree for the sentence that got hard to take. Lowercase licensors, such as +case, trigger covert movement (see Torr and Stabler 2016). 2504 1. The survey found that nearly half of Hong Kong consumers espouse what it identified as materialistic values compared with about one-third in Japan and the U.S. 2. 
What she did was like taking the law into your own hands 3. We work damn hard at what we do for damn little pay and what she did cast unfair aspersions on all of us 4. There may be others doing what she did 5. The U.S. wants the removal of what it perceives as barriers to investment ; Japan denies there are real barriers 6. But they have n’t clarified what those might be 7. Deregulation has effectively removed all restrictions on what banks can pay for deposits as well as opened up the field for new products such as high - rate CDs 8. Mr. Martin said they have n’t yet decided what their next move would be but he did n’t rule out the possibility of a consent solicitation aimed at replacing Georgia Gulf ’s board 9. What matters is what advertisers are paying per page and in that department we are doing fine this fall said Mr. Spoon w.o. 10. What this tells us is that U.S. trade law is working he said t.o. 11. The paper accused him of being a leading proponent of peaceful evolution a catch phrase to describe what China believes is the policy of Western countries to seduce socialist nations into the capitalist sphere t.o. 12. Despite the harsh exchanges the U.S. and China still seem to be looking for a way to mend relations which have deteriorated into what Mr. Nixon referred to as the greatest crisis in Chinese - American relations since his initial visit to China num years ago 13. Judge Ramirez num said it is unjust for judges to make what they do 14. Judges are not getting what they deserve t.o. 15. Composer Marc Marder a college friend of Mr. Lane ’s who earns his living playing the double bass in classical music ensembles has prepared an exciting eclectic score that tells you what the characters are thinking and feeling far more precisely than intertitles or even words would 16. We have and I ’m sure others have considered what our options are and we ’ve had conversations with people who in the future might prove to be interesting partners Figure 7: The 16 sentences with free object relative clause dependencies in section 00 of the PTB. Each tick indicates a point awarded for the correct identification of the extraction site of the wh word; t.o. indicates that the parser timed out before returning a parse, and w.o. indicates that the parser correctly identified an object relative dependency but extracted the wrong object of a double object verb. Our Abstract parser correctly identified 13/17 dependencies with a precision of 13/14. Our A* CCG parser correctly recovered 15.5/17 of these dependencies with precision 15.5/17 (we awarded the CCG parser half a point for sentence 15 because it related what to thinking but not feeling, which it analysed as intransitive). Note that sentence 3 contains two free object relative clauses. 2505 1. It ’s the petulant complaint of an impudent American whom Sony hosted for a year while he was on a Luce Fellowship in Tokyo – to the regret of both parties 2. It said the man whom it did not name had been found to have the disease after hospital tests 3. Commonwealth Edison now faces an additional court-ordered refund on its summerwinter rate differential collections that the Illinois Appellate Court has estimated at $ num million 4. But Rep. Marge Roukema -LRB- R. N.J -RRB- instead praised the House ’s acceptance of a new youth training wage a subminimum that GOP administrations have sought for many years 5. Democratic Lt. Gov. 
Douglas Wilder opened his gubernatorial battle with Republican Marshall Coleman with an abortion commercial produced by Frank Greer that analysts of every political persuasion agree was a tour de force 6. Against a shot of Monticello superimposed on an American flag an announcer talks about the strong tradition of freedom and individual liberty that Virginians have nurtured for generations 7. Another was Nancy Yeargin who came to Greenville in num full of the energy and ambitions that reformers wanted to reward 8. Mostly she says she wanted to prevent the damage to self - esteem that her low - ability students would suffer from doing badly on the test 9. Mrs. Ward says that when the cheating was discovered she wanted to avoid the morale damaging public disclosure that a trial would bring 10. Mr. Sherwood speculated that the leeway that Sea Containers has means that Temple would have to substantially increase their bid if they ’re going to top us 11. A high - balance customer that banks pine for she did n’t give much thought to the rates she was receiving nor to the fees she was paying 12. Interviews with analysts and business people in the U.S. suggest that Japanese capital may produce the economic cooperation that Southeast Asian politicians have pursued in fits and starts for decades 13. Interpublic Group said its television programming operations – which it expanded earlier this year – agreed to supply more than num hours of original programming across Europe in num 14. Interpublic is providing the programming in return for advertising time which it said will be valued at more than $ num million in num and $ num million in num 15. Mrs. Hills said many of the num countries that she placed under varying degrees of scrutiny have made genuine progress on this touchy issue 16. The Japanese companies bankroll many small U.S. companies with promising products or ideas frequently putting their money behind projects that commercial banks wo n’t touch 17. In investing on the basis of future transactions a role often performed by merchant banks trading companies can cut through the logjam that small - company owners often face with their local commercial banks 18. He described the situation as an escrow problem a timing issue which he said was rapidly rectified with no losses to customers 19. In CAT sections where students ’ knowledge of two - letter consonant sounds is tested the authors noted that Scoring High concentrated on the same sounds that the test does – to the exclusion of other sounds that fifth graders should know 20. The events of April through June damaged the respect and confidence which most Americans previously had for the leaders of China Figure 8: The 20 sentences with non-free object relative clause dependencies in section 00 of the PTB. Our reified parser correctly recovered 13/24 of these (with precision of 13/17) by using a tag dictionary threshold initially set to 5. If the parser did not find a parse, then this was increased to 10 and the sentence reparsed. If a parse was still not found, the tag dictionary was turned off completely and a final parse attempted (on the single run, with no tag dictionary, our abstract parser performed best, retrieving 10/24 dependencies; the CCG A* parser returned 15/24 with precision 15/20).
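The caption of Figure 8 describes the back-off strategy used with the Reified parser: parse with a tag-dictionary threshold of 5, reparse at 10 if no analysis is found, and finally reparse with the tag dictionary disabled. A minimal sketch of that control flow is given below; parse_with_tag_dict is a hypothetical wrapper around the parser rather than a function of the actual implementation.

```python
def parse_with_backoff(sentence, parse_with_tag_dict, thresholds=(5, 10, None)):
    """Try increasingly permissive tag-dictionary settings until a parse is found.

    parse_with_tag_dict(sentence, threshold) is assumed to return a parse or
    None; threshold=None stands for turning the tag dictionary off entirely.
    Returns the first successful parse and the threshold that produced it.
    """
    for threshold in thresholds:
        parse = parse_with_tag_dict(sentence, threshold)
        if parse is not None:
            return parse, threshold
    return None, None
```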
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506–2515 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2506 Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model Yitao Cai, Huiyu Cai and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University Center for Data Science, Peking University {caiyitao,hy cai,wanxiaojun}@pku.edu.cn Abstract Sarcasm is a subtle form of language in which people express the opposite of what is implied. Previous works of sarcasm detection focused on texts. However, more and more social media platforms like Twitter allow users to create multi-modal messages, including texts, images, and videos. It is insufficient to detect sarcasm from multi-model messages based only on texts. In this paper, we focus on multimodal sarcasm detection for tweets consisting of texts and images in Twitter. We treat text features, image features and image attributes as three modalities and propose a multi-modal hierarchical fusion model to address this task. Our model first extracts image features and attribute features, and then leverages attribute features and bidirectional LSTM network to extract text features. Features of three modalities are then reconstructed and fused into one feature vector for prediction. We create a multi-modal sarcasm detection dataset based on Twitter. Evaluation results on the dataset demonstrate the efficacy of our proposed model and the usefulness of the three modalities. 1 Introduction Merriam Webster defines sarcasm as “a mode of satirical wit depending for its effect on bitter, caustic, and often ironic language that is usually directed against an individual”. It has the magical power to disguise the hostility of the speaker (Dews and Winner, 1995) while enhancing the effect of mockery or humor on the listener. Sarcasm is prevalent on today’s social media platforms, and its automatic detection bears great significance in customer service, opinion mining, online harassment detection and all sorts of tasks that require knowledge of people’s real sentiment. Twitter has become a focus of sarcasm detection research due to its ample resources of publicly available sarcastic posts. Previous works on Twitter sarcasm detection focus on the text modality and propose many supervised approaches, including conventional machine learning methods with lexical features (Bouazizi and Ohtsuki, 2015; Pt´aˇcek et al., 2014), and deep learning methods (Wu et al., 2018; Baziotis et al., 2018). However, detecting sarcasm with only text modality can never be certain of the true intention of the simple tweet “What a wonderful weather!” until the dark clouds in the attached picture (Figure 1(a)) are seen. Images, while are ubiquitous on social platforms, can help reveal (Figure 1(a)), affirm (Figure 1(b)) or disprove the sarcastic nature of tweets, thus are intuitively crucial to Twitter sarcasm detection tasks. In this work, we propose a multi-modal hierarchical fusion model for detecting sarcasm in Twitter. We leverage three types of features, namely text, image and image attribute features, and fuse them in a novel way. During early fusion, the attribute features are used to initialize a bi-directional LSTM network (Bi-LSTM), which is then used to extract the text features. 
The three features then undergo representation fusion, where they are transformed into reconstructed representation vectors. A modality fusion layer performs weighted average to the vectors and pumps them to a classification layer to yield the final result. Our results show that all three types of features contribute to the model performance. Furthermore, our fusion strategy successfully refines the representation of each modality and is significantly more effective than simply concatenating the three types of features. Our main contributions are summarized as follows: • We propose a novel hierarchical fusion model to address the challenging multi-modal sar2507 (a)“What a wonderful weather!” (b)“Yep, totally normal <user>. Nothing is off about this. Nothing at all. #itstoohotalready #climatechangeisreal” Figure 1: Examples of image modality aiding sarcasm detection. (a) The image is necessary for the sarcasm to be spotted due to the contradiction of dark clouds in the image and “wonderful weather” in the text; (b) The image affirms the sarcastic nature of the tweet by showing the weather is actually very “hot” and is not at all “totally normal”. casm detection task in Twitter. To the best of our knowledge, we are the first to deeply fuse the three modalities of image, attribute and text, rather than na¨ıve concatenation, for Twitter sarcasm detection. • We create a new dataset for multi-modal Twitter sarcasm detection and release it1. • We quantitatively show the significance of each modality in Twitter sarcasm detection. We further show that to fully unleash the potential of images, we would need to consider image attributes - a high-level abstract information bridging the gap between texts and images. 2 Related Works 2.1 Sarcasm Detection Various methods have been proposed for sarcasm detection from texts. Earlier methods extract carefully engineered discrete features from texts (Davidov et al., 2010; Riloff et al., 2013; Pt´aˇcek et al., 2014; Bouazizi and Ohtsuki, 2015), including n-grams, word’s sentiment, punctuations, emoticons, part-of-speech tags, etc. More recently, researchers leverage the powerful techniques of deep learning to get more precise semantic representations of tweet texts. Ghosh and Veale (2016) propose a model with CNN and RNN layers. Besides the tweet content in question, contextual features such as historical behaviors of the author and the audience serve as a good indicator for 1https://github.com/headacheboy/data-of-multimodalsarcasm-detection sarcasm. Bamman and Smith (2015) make use of human-engineered author, audience and response features to promote sarcasm detection. Zhang, Zhang and Fu (2016) concatenate target tweet embeddings(obtained by a Bi-GRU model) with manually engineered contextual features, and show fair improvement compared to completely featurebased systems. Amir et al. (2016) exploit trainable user embeddings to enhance the performance of a CNN classification model. Poria et al. (2016) use the concatenated output of CNNs trained on tweets and pre-trained on emotion, sentiment, personality as the inputs for the final SVM classifier. Y. Tay et al. (2018) come up with a novel multi-dimensional intra-attention mechanism to explicitly model contrast and incongruity. Wu et al. (2018) construct a multi-task model with densely connected LSTM based on embeddings, sentiment features and syntactic features. Baziotis et al. 
(2018) ensemble a word based bidirectional LSTM and a character based bidirectional LSTM to capture both semantic and syntactic features. However, little has been revealed by far on how to effectively combine textual and visual information to boost performance of Twitter sarcasm detection. Schifanella et al. (2016) simply concatenate manually designed features or deep learning based features of texts and images to make prediction with two modalities. Different from this work, we propose a hierarchical fusion model to deeply fuse three modalities. 2.2 Other Multi-Modal Tasks Sentiment analysis is a related task with sarcasm detection. Many researches on multi-modal sen2508 timent analysis deal with video data (Wang et al., 2016; Zadeh et al., 2017), where text, image and audio data can usually be aligned and support each other. Though inputs are different, their fusion mechanisms can be inspiring to our task. Poria, Cambria, and Gelbukh (2015) use multiple kernel learning to fuse different modalities. Zadeh et al. (2017) build their fusion layer by outer product instead of simple concatenation in order to get more features. Gu et al. (2018b) align text and audio at word level and apply several attention mechanisms. Gu et al. (2018a) first introduce modality fusion structure attempting to reveal the actual importance of multiple modalities, but their methods are quite different from our hierarchical fusion techniques. Inspiration can also be drawn from other multimodal tasks, such as visual question answering (VQA) tasks where a frame of image and a query sentence are provided as model inputs. A question-guided attention mechanism is proposed in VQA tasks (Chen et al., 2015) and can boost model performance compared to those using global image features. Attribute prediction layer is introduced (Wu et al., 2016) as a way to incorporate high-level concepts into the CNN-LSTM framework. Wang et al. (2017) exploit a handful of off-the-shelf algorithms, gluing them with a co-attention model and achieve generalizability as well as scalability. Yang et al. (2014) try image emotion extraction tasks with image comments and propose a model to bridge images and comment information by learning Bernoulli parameters. 3 Proposed Hierarchical Fusion Model Figure 2 shows the architecture of our proposed hierarchical fusion model. In this work, we treat text, image and image attribute as three modalities. Image attribute modality has been shown to boost model performance by adding high-level concept of the image content (Wu et al., 2016). Modality fusion techniques are proposed to make full use of the three modalities. In the following paragraph, we will first define raw vectors and guidance vectors, and then briefly introduce our hierarchical fusion techniques. For the image modality, we use a pre-trained and fine-tuned ResNet model to obtain 14 × 14 regional vectors of the tweet image, which is defined as the raw image vectors, and average them to get our image guidance vector. For the (image) attribute modality, we use another pre-trained and fine-tuned ResNet models to predict 5 attributes for each image, the GloVe embeddings of which are considered as the raw attribute vectors. Our attribute guidance vector is a weighted average of the raw attribute vectors. We use Bi-LSTM to obtain our text vectors. The raw text vectors are the concatenated forward and backward hidden states for each time step of the Bi-LSTM, while the text guidance vector is the average of the above raw vectors. 
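Concretely, the image and text guidance vectors are plain averages of their raw vectors, while the attribute guidance vector is an attention-weighted average whose weights are defined in Section 3.2. The PyTorch-style sketch below illustrates this pooling; the tensor shapes and module names are illustrative assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

class GuidanceVectors(nn.Module):
    """Pool the raw vectors of each modality into a single guidance vector."""

    def __init__(self, attr_dim, hidden_dim):
        super().__init__()
        # two-layer scorer producing the attribute attention weights
        self.attr_att = nn.Sequential(
            nn.Linear(attr_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, region_feats, text_states, attr_embs):
        # region_feats: (196, d_img) regional ResNet vectors of one image
        # text_states : (L, 2 * d_lstm) concatenated Bi-LSTM hidden states
        # attr_embs   : (5, d_emb) GloVe embeddings of the predicted attributes
        v_image = region_feats.mean(dim=0)                       # simple average
        v_text = text_states.mean(dim=0)                         # simple average
        alpha = torch.softmax(self.attr_att(attr_embs).squeeze(-1), dim=0)
        v_attr = (alpha.unsqueeze(-1) * attr_embs).sum(dim=0)    # weighted average
        return v_image, v_text, v_attr
```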
In the belief that the attached image could aid the model’s understanding of the tweet text, we apply non-linear transformations on the attribute guidance vector and feed the result to the Bi-LSTM as its initial hidden state. This process is named early fusion. In order to utilize multimodal information to refine representations of all modalities, representation fusion is proposed in which feature vectors of the three modalities are reconstructed using raw vectors and guidance vectors. The refined vectors of three modalities are combined into one vector with weighted average instead of simple concatenation in the process of modality fusion. Lastly, the fused vector is pumped into a two layer fully-connected neural network to obtain classification result. More details of our model are provided below. 3.1 Image Feature Representation We use ResNet-50 V2 (He et al., 2016) to obtain representations of tweet images. We chop the last fully-connected (FC) layer of the pre-trained model and replace it with a new one for the sake of model fine-tuning. Following (Wang et al., 2017), a input image I is re-sized to 448 × 448 and divided into 14 × 14 regions. Each region Ii (i = 1, 2 . . . , 196) is then sent through the ResNet model to obtain a regional feature representation vregioni, a.k.a. a raw image vector. vregioni = ResNet(Ii) As is described before, the image guidance vector vimage is the average of all regional image vectors. vimage = PNr i=1 vregioni Nr where Nr is the number of regions and is 196 in this work. 2509 text modality YUM hospital cafeteria food – at Lake Charles Memorial Hospital Raw Vectors Input tweet image modality attribute modality Guidance Vectors Bi-LSTM Reconstructed Feature Vectors regional image vectors Representation Fusion fork knife seat white meat Early Fusion Representation Fusion Representation Fusion Representation Fusion Fused vector (for classification) Modality Fusion GloVe embedding Figure 2: Overview of our proposed model 3.2 Attribute Feature Representation Previous work (Wu et al., 2016) in image captioning and visual question answering introduces attributes as high-level concepts of images. In their work, single-label and multi-label losses are proposed to train the attribute prediction CNN, whose parameters are transferred to generate the final image representation. While they use parameter sharing for better image representation with attributelabeling tasks, we take a more explicit approach. We treat attributes as an extra modality bridging the tweet text and image, by directly using the word embeddings of five predicted attributes of each tweet image as the raw attribute vectors. We first train an attribute predictor with ResNet101 and COCO image captioning dataset (Lin et al., 2014). We build the multi-label dataset by extracting 1000 attributes from sentences of the COCO dataset. We use a ResNet model pretrained on ImageNet (Russakovsky et al., 2015) and fine-tune it on the multi-label dataset. Then the attribute predictor is used to predict five attributes ai (i = 1, . . . , 5) for each image. We generate the attribute guidance vector by weighted average. Raw attribute vectors e(ai) are passed through a two-layer neural network to obtain the attention weights αi for constructing the attribute guidance vector vattr. The related equations are as follows. 
αi = W2 · tanh(W1 · e(ai) + b1) + b2 α = softmax(α) vattr = Na X i=1 αie(ai) where ai is the ith image attribute, literally a word out of a vocabulary of 1000; e is the GloVe embedding operation; W1 and W2 are weight matrices; b1 and b2 are biases; Na is the number of attributes, and is 5 in our settings. 3.3 Text Feature Representation Bidirectional LSTM (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) are used to obtain the representation of the tweet text. The equations of operations performed by LSTM at time step t are as follows: it = σ(Wi · xt + Ui · ht−1) ft = σ(Wf · xt + Uf · ht−1) ot = σ(Wo · xt + Uo · ht−1) ˜ct = tanh(Wc · xt + Uc · ht−1) ct = ft ⊙ct−1 + it ⊙˜ct ht = ot ⊙tanh(ct) where Wi, Wf, Wo, Ui, Uf, Uo are weight matrices; xt, ht are input state and hidden state at time step t, respectively; σ is the sigmoid function; ⊙ denotes element-wise product. The text guidance 2510 vector is the arithmetic average of hidden states in each time step. vtext = PL i=1 ht L where L is the length of the tweet text. 3.4 Early Fusion The Bi-LSTM initial states are usually set to zeroes in text classification tasks, but it is a potential spot where multi-modal information could be infused to promote the modal’s comprehension of the text. In the proposed model, we apply the nonlinearly transformed attribute guidance vector as the initial state of Bi-LSTM. [hf0; hb0; cf0; cb0] = ReLu(W · vattr + b) where hf0, cf0 are forward LSTM initial states and hb0, cb0 are backward LSTM initial states; [; ] is vector concatenation; ReLu denotes elementwise application of the Rectified Linear Units activation function; W and b are weight matrix and bias. We also try to use image guidance vector for early fusion, in which the LSTM initial states are obtained with means similar to the one described above, but it does not perform very well, as will be discussed in the experiments. 3.5 Representation Fusion Inspired by attention mechanism in VQA tasks, representation fusion aims at reconstructing the feature vectors vimage, vtext, vattr with the help of low-level raw vectors (namely, the hidden states of time step t {ht} for the text modality, the 196 regional vectors for the image modality, and the five attribute embeddings for the attribute modality) and high-level guidance vectors from different modalities. We denote X(i) m as the ith raw vector from modality m (which may be text, image or attribute). The key in this stage is to calculate the weight for each X(i) m . The weighted average then becomes the new representation of modality m. To leverage as much information as possible and more accurately model the relationship between multiple modalities, we exploit information from all three modalities - more explicitly, guidance vectors vn where n could be text, image or attribute, when calculating the weights of raw vectors in each modality. For the ith raw vector of each modality m, we calculate three guided weights α(i) mn from the guidance vectors of different modalities n. The final reconstruction weight for the raw vector is the average of the normalized guided weights. 
α(i) mn = Wmn2 · tanh(Wmn1 · [X(i) m ; vn] + bmn1) + bmn2 αmn = softmax(αmn) α(i) m = P n∈{text, image, attr} α(i) mn 3 vm = Lm X i=1 α(i) m X(i) m where m, n ∈{text, image, attr} denote modalities; α(i) mn is the guided weight for the ith raw vector of modality m under the guidance of modality n, and αmn contains all α(i) mn of all raw vectors of modality m under the guidance of modality n; α(i) m is the final reconstruction weight for the ith raw vector of modality m; Lm is the length of sequence {X(i) m }; Wmn1 ,Wmn2 are weight matrices and bmn1, bmn2 are biases. After representation fusion, vimage, vtext, vattr, previously denoted as guidance vectors, are now considered feature vectors of each modality and ready to serve as inputs of the next layer. 3.6 Modality Fusion Instead of simply concatenating the feature vectors from different modalities to form a longer vector, we perform modality fusion motivated by the work of (Gu et al., 2018a). The feature vector for each modality m, denoted as vm, is first transformed into a fixed-length form v′ m. A twolayer feed-forward neural network is implemented to calculate the attention weights for each modality m, which is then used in the weighted average of transformed feature vectors v′ m. The result is a single, fixed-length vector vfused. ˜αm = Wm2 · tanh(Wm1 · vm + bm1) + bm2 ˜α = softmax(˜α) v′ m = tanh(Wm3 · vm + bm3) vfused = X m∈{text, image, attr} ˜αmv′ m where m is one of the three modalities and ˜α is a vector containing ˜αm; Wm1, Wm2, Wm3 are 2511 Training Development Test sentences 19816 2410 2409 positive 8642 959 959 negative 11174 1451 1450 Table 1: Statistics of our dataset weight matrices. bm1, bm2, bm3 are biases; vm represents reconstructed feature vectors in the representation fusion process. 3.7 Classification layer We use a two layer fully-connected neural network as our classification layer. The activation function of the hidden layer and the output layer are element-wise ReLu and sigmoid functions, respectively. The loss function is cross entropy. 4 Dataset and Preprocessing There is no publicly available dataset for evaluating the multi-modal sarcasm detection task, and thus we build our own dataset, which will be released later. We collect and preprocess our data similar to (Schifanella et al., 2016). We collect English tweets containing a picture and some special hashtag (e.g., #sarcasm, etc.) as positive examples (i.e. sarcastic) and collect English tweets with images but without such hashtags as negative examples (i.e. not sarcastic). We further clean up the data as follows. First, we discard tweets containing sarcasm, sarcastic, irony, ironic as regular words. We also discard tweets containing URLs in order to avoid introducing additional information. Furthermore, we discard tweets with words that frequently co-occur with sarcastic tweets and thus may express sarcasm, for instance jokes, humor and exgag. We divide the data into training set, development set and test set with a ratio of 80%:10%:10%. In order to evaluate models more accurately, we manually check the development set and the test set to ensure the accuracy of the labels. The statistics of our final dataset are listed in table 1. For preprocessing, we first replace mentions with a certain symbol ⟨user⟩. We then separate words, emoticons and hashtags with the NLTK toolkit. We also separate hashtag sign # from hashtags and replace capitals with their lowercases. 
Finally, words appearing only once in the training set and words not appearing in the training set but appearing in the development set or test Hyper-parameters Value LSTM hidden size 256 Batch size 32 Learning rate 0.001 Gradient Clipping 5 Early stop patience 5 Word and attribute embedding size 200 ResNet FC size 1024 Modality fusion size 512 LSTM dropout rate 0.2 Classification layer l2 parameters 1e-7 Table 2: Hyper-parameters set are replaced with a certain symbol ⟨unk⟩. 5 Experiments 5.1 Training Details Pre-trained models. The pre-trained ResNet model is available online. The word embeddings and attribute embeddings are trained on the Twitter dataset using Glove (Pennington et al., 2014). Fine tuning. Parameters of the pre-trained ResNet model are fixed during training. Parameters of word and attribute embeddings are updated during training. Optimization. We use the Adam optimizer (Kingma and Ba, 2014) to optimize the loss function. Hyper-parameters. The hidden layer size in the neural networks described in the fusion techniques is half of its input size. Other hyper-parameters are listed in table 2. 5.2 Comparison Results Table 3 shows the comparison results (F-score and Accuracy) of baseline models and our proposed model. We implement models with one or multiple modalities as baseline models. We also present the results of na¨ıve solution (all negative, random) of this task. Random. It randomly predicts whether a tweet is sarcastic or not. Text(Bi-LSTM). Bi-LSTM is one of the most popular method for addressing many text classification problems. It leverages a bidirectional LSTM network for learning text representations and then uses a classification layer to make prediction. Text(CNN). CNN is also one of the state-of-theart methods to address text classification problems. We implement text CNN (Kim, 2014) as a baseline model. 2512 Model F-score Pre Rec Acc All negative 0.6019 Random 0.4470 0.4005 0.5057 0.5027 Text(Bi-LSTM) 0.7753 0.7666 0.7842 0.8190 Text(CNN) 0.7532 0.7429 0.7639 0.8003 Image 0.6153 0.5441 0.7080 0.6476 Attr 0.6334 0.5606 0.7278 0.6646 Concat(2) 0.7799 0.7388 0.8259 0.8103 Concat(3) 0.7874 0.7336 0.8498 0.8174 Our model 0.8018 0.7657 0.8415 0.8344 Table 3: Comparison results Image. Image vectors after the pooling layer of ResNet are inputs of the classification layer. We only update parameters of the classification layer. Attr. Since image attribute is one of the modalities in our proposed model, we also try to use only attribute features to make prediction. The attribute feature vectors are inputs of the classification layer. Concat. Previous work (Schifanella et al., 2016) concatenates different feature vectors of different modalities as the input of the classification layer. We implement this concatenation model with our feature vectors of different modalities and apply it for classification. The number in parentheses is the number of modalities we use. (2) means concatenating text features and image features, while (3) means concatenating all text, image and attribute features. We can see that the models based only on the image or attribute modality do not perform well, while Text(Bi-LSTM) and Text(CNN) models perform much better, indicating the important role of text modality. The Concat(3) model outperforms Concat(2), because adding attributes as a new modality actually introduces external semantic information of images and helps the model when it fails to extract valid image features. 
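To make the contrast with the fusion layers concrete, the sketch below shows what the Concat baseline boils down to: the feature vectors of the available modalities are simply concatenated and passed to the two-layer classification layer of Section 3.7. PyTorch is assumed here, and the class name and feature dimensions are illustrative choices rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ConcatBaseline(nn.Module):
    """Concat(3): concatenate the text, image and attribute feature vectors
    and classify with a two-layer fully-connected network (sigmoid output)."""
    def __init__(self, d_text=512, d_image=1024, d_attr=200, d_hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_text + d_image + d_attr, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, v_text, v_image, v_attr):
        fused = torch.cat([v_text, v_image, v_attr], dim=-1)  # simple concatenation
        return torch.sigmoid(self.classifier(fused))          # sarcasm probability

# Toy usage: a batch of 4 tweets with random feature vectors
model = ConcatBaseline()
probs = model(torch.randn(4, 512), torch.randn(4, 1024), torch.randn(4, 200))
```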
Our proposed hierarchical fusion model further improves the performance and achieves the state-of-the-art scores, revealing that our fusion model leverages features of three modalities in a more effective way. We further apply sign tests between our proposed model and Text(Bi-LSTM), Concat(2), Concat(3) models. The null hypotheses are that our proposed model doesn’t perform better than each baseline model. The statistics of the sign tests are listed in table 4. All significance levels are less than 0.05. Therefore, all of the null hypotheses is rejected and our proposed model significantly perConcat(3) Concat(2) Text(Bi-LSTM) t+ 106 149 120 t− 65 91 83 p 0.0011 0.0001 0.0057 Table 4: Statistics of sign tests. (t+ is the number of tweets that our proposed model predicts them right but baseline models do not. t−is the number of tweets that baseline models predict them right but our proposed model does not. p is the significance value.) forms better than baseline models. 5.3 Component Analysis of Our Model We further evaluate the influence of early fusion, representation fusion, as well as different modality representation in early fusion on the final performance. The evaluation results are listed in Table 5. F-score Pre Rec Acc w/o EF 0.7880 0.7570 0.8217 0.8240 w/o RF 0.7902 0.7456 0.8405 0.8223 EF(img) 0.7787 0.7099 0.8624 0.8049 Our model 0.8018 0.7657 0.8415 0.8344 Table 5: Ablation study. ‘w/o’ means removal of this component. EF denotes early fusion. RF denotes representation fusion. EF(img) means using image guidance vectors for early fusion. We can see that the removal of early fusion decreases the performance, which shows that early fusion can improve the text representation. Early fusion with attribute representation performs better than that with image representation, indicating the gap between text representation and image representation. If representation fusion is removed, the performance is also decreased, which indicates that representation fusion is necessary and that the representation fusion can refine the feature representation of each modality. 6 Visualization Analysis 6.1 Running Examples Figure 3 shows some sarcastic examples that our proposed model predicts them correctly while the model with only text modality fails to label them right. It shows that with our model, images and attributes can contribute to sarcasm detection. For example, an image with a dangerous tackle and a text saying ’not dangerous’ convey strong sarcasm in example (a). ’Respectful customers’ is contradicted to the messy parcels as well as the attribute 2513 (a) this isn 't dangerous . going to teach my players to tackle like this at practice first thing tomorrow morning . (b) i love respectful customers (c) <user> your counselor is so cute . glad you 're staffing up so well . attributes: field players playing soccer men attributes: pile stack messy sitting boxes attributes: teddy bear wearing hat brown Figure 3: Examples of sarcastic tweets Weather ‘s lookin amazing today • ... Attributes: houses street sitting trees near happy testing monday ! eat a good breakfast # serious yum hospital cafeteria food – at lake charles memorial hospital Attributes: cloth sitting colored table meat Attributes: fork knife sitting white meat (a) (b) (c) Figure 4: Attention visualization of sarcastic tweets ’messy’ in example (b). Without images, successfully detecting these sarcasm instances is almost impossible. 
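As a brief aside on the significance testing of Section 5.2, the one-sided sign test behind Table 4 can be reproduced with a standard binomial test. The sketch below assumes SciPy (version 1.7 or later for binomtest); t_plus and t_minus are the counts defined in the caption of Table 4.

```python
from scipy.stats import binomtest

def sign_test(t_plus, t_minus):
    """One-sided sign test: p-value for observing at least t_plus successes
    out of t_plus + t_minus trials under a fair coin (the null hypothesis)."""
    return binomtest(t_plus, n=t_plus + t_minus, p=0.5, alternative="greater").pvalue

# Counts from Table 4; the resulting p-values are close to the reported
# 0.0011 (Concat(3)), 0.0001 (Concat(2)) and 0.0057 (Text(Bi-LSTM)).
for baseline, tp, tm in [("Concat(3)", 106, 65), ("Concat(2)", 149, 91), ("Text(Bi-LSTM)", 120, 83)]:
    print(baseline, round(sign_test(tp, tm), 4))
```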
The model with only text modality fails to detect sarcasm as for example (c), though the word so is repeated several times in example (c). However, with image and attribute modalities, our proposed model correctly detects sarcasm in these tweets. 6.2 Attention Visualization Figure 4 shows the attention of some examples at the representation fusion stage. Our model can successfully focus on the appropriate parts of the image, the essential words in the sentences and the important attributes. For example, our model pays more attention on the unamused face emoji and the word ’amazing’ for texts, and pays more attention on the gloomy sky in example (a), thus this tweet is predicted as sarcastic tweet because of the inconsistency of these two modalities. In example (b), our model focuses on the word ’serious’ in texts and focuses on the simple meal in the picture that contradicts to the ’good breakfast’, revealing that this tweet should be sarcastic. In example (c), the word ’yum’, the attribute ’meat’ and the food in the image indicate the sarcastic meaning of the tweet. 6.3 Error Analysis Figure 5 shows an example that our model fails to label it right. yo <user> thanks for the yearly fee reminder! Here's to you! #planetfitness #hiddenfee #mrmet Attributes: ball holding shoes little white Figure 5: Example of misclassified samples In the example, the insulting gesture in the picture is contrast to the phrase ’thanks for’. However, the model is unable to obtain the common sense that this gesture is insulting. Therefore, the attention of this picture does not focus on the insulting gesture. Moreover, attributes do not reveal the insulting meaning of the pictures as well, thus our model fails to predict this tweet as sarcastic. 7 Conclusion and Future Work In this paper we propose a new hierarchical fusion model to make full use of three modalities (images, texts and image attributes) to address the challenging multi-modal sarcasm detection task. Evaluation results demonstrate the effectiveness of our proposed model and the usefulness of the three modalities. In future work, we will incorporate other modality such as audio into the sarcasm detection task and we will also investigate to make use of common sense knowledge in our model. 2514 Acknowledgment This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. References Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Carvalho, and M´ario J. Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. CoRR, abs/1607.00976. David Bamman and Noah A. Smith. 2015. Contextualized sarcasm detection on twitter. In ICWSM. Christos Baziotis, Nikos Athanasiou, Pinelopi Papalampidi, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, and Alexandros Potamianos. 2018. Ntua-slp at semeval-2018 task 3: Tracking ironic tweets using ensembles of word and character level attentive rnns. arXiv preprint arXiv:1804.06659. M. Bouazizi and T. Ohtsuki. 2015. Sarcasm detection in twitter: ”all your products are incredibly amazing!!!” - are they really? In 2015 IEEE Global Communications Conference (GLOBECOM), pages 1–6. Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. 
Abccnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL ’10, pages 107–116, Stroudsburg, PA, USA. Association for Computational Linguistics. Shelly Dews and Ellen Winner. 1995. Muting the meaning a social function of irony. Metaphor and Symbol, 10(1):3–19. Aniruddha Ghosh and Dr. Tony Veale. 2016. Fracking sarcasm using neural network. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 161–169. Association for Computational Linguistics. Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, and Ivan Marsic. 2018a. Hybrid attention based multimodal network for spoken language classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2379–2390. Association for Computational Linguistics. Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, and Ivan Marsic. 2018b. Multimodal affective analysis using hierarchical attention strategy with word-level alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2225–2235. Association for Computational Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks. CoRR, abs/1603.05027. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yoon Kim. 2014. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision – ECCV 2014, pages 740–755, Cham. Springer International Publishing. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing, pages 1532– 1543. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2015. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 2539–2544. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, and Prateek Vij. 2016. A deeper look into sarcastic tweets using deep convolutional neural networks. CoRR, abs/1610.08815. Tom´aˇs Pt´aˇcek, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on czech and english twitter. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 213–223. Dublin City University and Association for Computational Linguistics. Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In EMNLP 2013 - 2013 2515 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 704–714. Association for Computational Linguistics (ACL). 
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252. Rossano Schifanella, Paloma de Juan, Joel R. Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. CoRR, abs/1608.02289. Yi Tay, Luu Anh Tuan, Siu Cheung Hui, and Jian Su. 2018. Reasoning with sarcasm by reading inbetween. CoRR, abs/1805.02856. Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P. Xing. 2016. Select-additive learning: Improving cross-individual generalization in multimodal sentiment analysis. CoRR, abs/1609.05244. Peng Wang, Qi Wu, Chunhua Shen, and Anton van den Hengel. 2017. The vqa-machine: Learning how to use existing vision algorithms to answer new questions. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn, volume 4. Chuhan Wu, Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, and Yongfeng Huang. 2018. Thu ngn at semeval-2018 task 3: Tweet irony detection with densely connected lstm and multi-task learning. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 51–56. Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, and Anton van den Hengel. 2016. What value do explicit high level concepts have in vision to language problems? In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 203–212. Yang Yang, Jia Jia, Shumei Zhang, Boya Wu, Qicong Chen, Juanzi Li, Chunxiao Xing, and Jie Tang. 2014. How do your friends on social media disclose your emotions? In Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 306–312. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. CoRR, abs/1707.07250. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network. In COLING.
2019
239
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241–251 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 241 Attention Guided Graph Convolutional Networks for Relation Extraction Zhijiang Guo∗, Yan Zhang∗and Wei Lu StatNLP Research Group Singapore University of Technology and Design {zhijiang guo,yan zhang}@mymail.sutd.edu.sg, [email protected] Abstract Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches. 1 Introduction Relation extraction aims to detect relations among entities in the text. It plays a significant role in a variety of natural language processing applications including biomedical knowledge discovery (Quirk and Poon, 2017), knowledge base population (Zhang et al., 2017) and question answering (Yu et al., 2017). Figure 1 shows an example about expressing a relation sensitivity among three entities L858E, EGFR and gefitinib in two sentences. Most existing relation extraction models can be categorized into two classes: sequence-based and dependency-based. Sequence-based models operate only on the word sequences (Zeng et al., ∗Equally Contributed. 2014; Wang et al., 2016), whereas dependencybased models incorporate dependency trees into the models (Bunescu and Mooney, 2005; Peng et al., 2017). Compared to sequence-based models, dependency-based models are able to capture non-local syntactic relations that are obscure from the surface form alone (Zhang et al., 2018). Various pruning strategies are also proposed to distill the dependency information in order to further improve the performance. Xu et al. (2015b,c) apply neural networks only on the shortest dependency path between the entities in the full tree. Miwa and Bansal (2016) reduce the full tree to the subtree below the lowest common ancestor (LCA) of the entities. Zhang et al. (2018) apply graph convolutional networks (GCNs) (Kipf and Welling, 2017) model over a pruned tree. This tree includes tokens that are up to distance K away from the dependency path in the LCA subtree. However, rule-based pruning strategies might eliminate some important information in the full tree. Figure 1 shows an example in cross-sentence n-ary relation extraction that the key tokens partial response would be excluded if the model only takes the pruned tree into consideration. Ideally, the model should be able to learn how to maintain a balance between including and excluding information in the full tree. 
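For readers unfamiliar with these pruning schemes, the following is a simplified sketch of distance-K, path-centric pruning in the spirit of Zhang et al. (2018): it treats the dependency tree as an undirected graph and keeps only tokens within distance K of the dependency path between the two entities, ignoring the LCA-subtree detail. The use of networkx and all names here are illustrative assumptions, not the original implementation.

```python
import networkx as nx

def path_centric_prune(edges, e1, e2, k=1):
    """Keep tokens within distance k of the dependency path between the two
    entity tokens e1 and e2 (simplified: undirected tree, no LCA restriction)."""
    tree = nx.Graph(edges)                          # dependency tree as an undirected graph
    path = nx.shortest_path(tree, e1, e2)           # dependency path between the entities
    keep = set()
    for node in tree.nodes:
        if min(nx.shortest_path_length(tree, node, p) for p in path) <= k:
            keep.add(node)
    return keep

# Toy tree: chain 0-1-2-3-4 with an extra modifier 5 attached to token 2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]
print(path_centric_prune(edges, e1=0, e2=3, k=0))   # {0, 1, 2, 3}: the path only
print(path_centric_prune(edges, e1=0, e2=3, k=1))   # adds tokens 4 and 5
```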
In this paper, we propose the novel Attention Guided Graph Convolutional Networks (AGGCNs), which operate directly on the full tree. Intuitively, we develop a “soft pruning” strategy that transforms the original dependency tree into a fully connected edgeweighted graph. These weights can be viewed as the strength of relatedness between nodes, which can be learned in an end-to-end fashion by using self-attention mechanism (Vaswani et al., 2017). In order to encode a large fully connected graph, we next introduce dense connections (Huang et al., 2017) to the GCN model following (Guo et al., 242 AUXPASS The deletion mutation on exon-19 of EGFR gene was present in 16 patients, while the L858E point mutation on exon-21 was noted. All patients were treated response. with gefitinib and showed a partial ROOT DET NN PREP_ON PREP_OF NN NSUBJ COP NUM PREP_IN NN NN DET MARK PREP_ON AUXPASS NSUBJPASS ADVCL DET NSUBJPASS PREP_WITH CONJ_AND DOBJ DET AMOD NEXT ROOT Figure 1: An example dependency tree for two sentences expressing a relation (sensitivity) among three entities. The shortest dependency path between these entities is highlighted in bold (edges and tokens). The root node of the LCA subtree of entities is present. The dotted edges indicate tokens K=1 away from the subtree. Note that tokens partial response off these paths (shortest dependency path, LCA subtree, pruned tree when K=1). 2019). For GCNs, L layers will be needed in order to capture neighborhood information that is L hops away. A shallow GCN model may not be able to capture non-local interactions of large graphs. Interestingly, while deeper GCNs can capture richer neighborhood information of a graph, empirically it has been observed that the best performance is achieved with a 2-layer model (Xu et al., 2018). With the help of dense connections, we are able to train the AGGCN model with a large depth, allowing rich local and non-local dependency information to be captured. Experiments show that our model is able to achieve better performance for various tasks. For the cross-sentence relation extraction task, our model surpasses the current state-of-theart models on multi-class ternary and binary relation extraction by 8% and 6% in terms of accuracy respectively. For the largescale sentence-level extraction task (TACRED dataset), our model is also consistently better than others, showing the effectiveness of the model on a large training set. Our code is available at http://www.statnlp.org/ research/information-extraction1 Our contributions are summarized as follows: • We propose the novel AGGCNs that learn a “soft pruning” strategy in an end-to-end fashion, which learns how to select and discard information. Combining with dense connections, our AGGCN model is able to learn a better graph representation. • Our model achieves new state-of-the-art results without additional computational over1Implementation is based on Pytorch (Paszke et al., 2017). head when compared with previous GCNs.2 Unlike tree-structured models (e.g., TreeLSTM (Tai et al., 2015)), it can be efficiently applied over dependency trees in parallel. 2 Attention Guided GCNs In this section, we will present the basic components used for constructing our AGGCN model. 2.1 GCNs GCNs are neural networks that operate directly on graph structures (Kipf and Welling, 2017). Here we mathematically illustrate how multi-layer GCNs work on a graph. Given a graph with n nodes, we can represent the graph with an n × n adjacency matrix A. 
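As a concrete, purely illustrative example of this representation, assuming NumPy and a toy graph given as directed head-to-dependent edges:

```python
import numpy as np

def adjacency_from_edges(edges, n):
    """n x n adjacency matrix A with A[i, j] = 1 for every edge i -> j."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = 1.0
    return A

# A 4-node dependency tree written as (head, dependent) pairs
A = adjacency_from_edges([(1, 0), (1, 2), (2, 3)], n=4)
```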
Marcheggiani and Titov (2017) extend GCNs for encoding dependency trees by incorporating directionality of edges into the model. They add a self-loop for each node in the tree. Opposite direction of a dependency arc is also included, which means Aij = 1 and Aji = 1 if there is an edge going from node i to node j, otherwise Aij = 0 and Aji = 0. The convolution computation for node i at the l-th layer, which takes the input feature representation h(l−1) as input and outputs the induced representation h(l) i , can be defined as: h(l) i = ρ  n X j=1 AijW(l)h(l−1) j + b(l) (1) where W(l) is the weight matrix, b(l) is the bias vector, and ρ is an activation function (e.g., RELU). h(0) i is the initial input xi, where xi ∈Rd and d is the input feature dimension. 2The size of the adjacency matrix representing the fully connected graph is the same as the one of the original tree. 243 0.3 0.1 0.1 0.2 0.6 0.2 0.1 0.1 0.7 0.1 0.2 0.6 0.2 0.2 0.1 0.3 0.3 V3 The winery includes gardens V1 V3 V2 V1 V4 0.1 0.2 0.1 0.6 1 1 0 0 1 0 0 1 1 0 1 1 1 0 1 1 Multi-Head Attention N V1 V2 V3 V4 V1 V2 V4 Attention Guided Layer V3 V2 Ã(1) Ã(N) G(1) G(N) Attention Guided Layer Densely Connected Layer Linear Combination Layer N Densely Connected Layer 0.1 0.3 0.3 0.1 0.3 0.1 0.2 0.3 0.3 0.1 0.2 0.4 0.1 0.3 0.3 0.7 0.3 0.1 0.1 0.1 0.2 0.1 0.2 0.7 0.1 0.1 0.1 0.4 0.2 0.3 0.6 0.3 0.1 0.1 0.3 Densely Connected Layer (number of sub-layers is 3) M V2 V3 V4 V1 V4 Figure 2: The AGGCN model is shown with an example sentence and its dependency tree. It is composed of M identical blocks and each block has three types of layers as shown on the right. Every block takes node embeddings and adjacency matrix that represents the graph as inputs. Then N attention guided adjacency matrices are constructed by using multi-head attention as shown at bottom left. The original dependency tree is transformed into N different fully connected edge-weighted graphs (self-loops are omitted for simplification). Numbers near the edges represent the weights in the matrix. Resulting matrices are fed into N separate densely connected layers, generating new representations. Top left shows an example of the densely connected layer, where the number (L) of sub-layers is 3 (L is a hyper-parameter). Each sub-layer concatenates all preceding outputs as the input. Eventually, a linear combination is applied to combine outputs of N densely connected layers into hidden representations. 2.2 Attention Guided Layer The AGGCN model is composed of M identical blocks as shown in Figure 2. Each block consists of three types of layers: attention guided layer, densely connected layer and linear combination layer. We first introduce the attention guided layer of the AGGCN model. As we discuss in Section 1, most existing pruning strategies are predefined. They prune the full tree into a subtree, based on which the adjacency matrix is constructed. In fact, such strategies can also be viewed as a form of hard attention (Xu et al., 2015a), where edges that connect nodes not on the resulting subtree will be directly assigned zero weights (not attended). Such strategies might eliminate relevant information from the original dependency tree. Instead of using rule-based pruning, we develop a “soft pruning” strategy in the attention guided layer, which assigns weights to all edges. These weights can be learned by the model in an end-to-end fashion. 
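Before turning to how the attention guided adjacency matrix is built, a minimal PyTorch sketch of the graph convolution in Eq. 1, with the adjacency conventions just described (self-loops, both edge directions), may be useful; layer sizes and names are illustrative assumptions, not the released code.

```python
import torch
import torch.nn as nn

def dependency_adjacency(edges, n):
    """Adjacency with self-loops and both edge directions, following the
    convention of Marcheggiani and Titov (2017) described above."""
    adj = torch.eye(n)
    for head, dep in edges:
        adj[head, dep] = adj[dep, head] = 1.0
    return adj

class GCNLayer(nn.Module):
    """One graph convolution (Eq. 1): h_i^(l) = rho(sum_j A_ij W h_j^(l-1) + b)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)        # holds W^(l) and b^(l)

    def forward(self, adj, h):
        # adj: (batch, n, n); h: (batch, n, d_in)
        return torch.relu(self.linear(torch.bmm(adj, h)))

adj = dependency_adjacency([(1, 0), (1, 2), (2, 3)], n=4).unsqueeze(0)
h = torch.randn(1, 4, 300)                          # initial node representations x_i
out = GCNLayer(300, 300)(adj, h)                    # shape (1, 4, 300)
```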
In the attention guided layer, we transform the original dependency tree into a fully connected edge-weighted graph by constructing an attention guided adjacency matrix ˜A. Each ˜A corresponds to a certain fully connected graph and each entry ˜Aij is the weight of the edge going from node i to node j. As shown in Figure 2, ˜A(1) represents a fully connected graph G(1). ˜A can be constructed by using self-attention mechanism (Cheng et al., 2016), which is an attention mechanism (Bahdanau et al., 2015) that captures the interactions between two arbitrary positions of a single sequence. Once we get ˜A, we can use it as the input for the computation of the later graph convolutional layer. Note that the size of ˜A is the same as the original adjacency matrix A (n × n). Therefore, no additional computational overhead is involved. The key idea behind the attention guided layer is to use attention for inducing relations between nodes, especially for those connected by indirect, multi-hop paths. These soft relations can be captured by differentiable functions in the model. Here we compute ˜A by using multi-head attention (Vaswani et al., 2017), which allows the model to jointly attend to information from different representation subspaces. The calculation involves a query and a set of key-value pairs. The output is computed as a weighted sum of the values, where the weight is computed by a function of the query with the corresponding key. ˜A(t) = softmax(QWQ i × (KWK i )T √ d )V (2) where Q and K are both equal to the collective representation h(l−1) at layer l −1 of the AG244 GCN model. The projections are parameter matrices WQ i ∈Rd×d and WK i ∈Rd×d. ˜A(t) is the t-th attention guided adjacency matrix corresponding to the t-th head. Up to N matrices are constructed, where N is a hyper-parameter. Figure 2 shows an example that the original adjacency matrix is transformed into multiple attention guided adjacency matrices. Accordingly, the input dependency tree is converted into multiple fully connected edge-weighted graphs. In practice, we treat the original adjacency matrix as an initialization so that the dependency information can be captured in the node representations for later attention calculation. The attention guided layer is included starting from the second block. 2.3 Densely Connected Layer Unlike previous pruning strategies, which lead to a resulting structure that is smaller than the original structure, our attention guided layer outputs a larger fully connected graph. Following (Guo et al., 2019), we introduce dense connections (Huang et al., 2017) into the AGGCN model in order to capture more structural information on large graphs. With the help of dense connections, we are able to train a deeper model, allowing rich local and non-local information to be captured for learning a better graph representation. Dense connectivity is shown in Figure 2. Direct connections are introduced from any layer to all its preceding layers. Mathematically, we first define g(l) j as the concatenation of the initial node representation and the node representations produced in layers 1, · · · , l −1: g(l) j = [xj; h(1) j ; ...; h(l−1) j ]. (3) In practice, each densely connected layer has L sub-layers. The dimensions of these sub-layers dhidden are decided by L and the input feature dimension d. In AGGCNs, we use dhidden = d/L. For example, if the densely connected layer has 3 sub-layers and the input dimension is 300, the hidden dimension of each sub-layer will be dhidden = d/L = 300/3 = 100. 
Then we concatenate the output of each sub-layer to form the new representation. Therefore, the output dimension is 300 (3 × 100). Different from the GCN model whose hidden dimension is larger than or equal to the input dimension, the AGGCN model shrinks the hidden dimension as the number of layers increases in order to improves the parameter efficiency similar to DenseNets (Huang et al., 2017). Since we have N different attention guided adjacency matrices, N separate densely connected layers are required. Accordingly, we modify the computation of each layer as follows (for the t-th matrix ˜A(t)): h(l) ti = ρ  n X j=1 ˜A(t) ij W(l) t g(l) j + b(l) t  (4) where t = 1, ..., N and t selects the weight matrix and bias term associated with the attention guided adjacency matrix ˜A(t). The column dimension of the weight matrix increases by dhidden per sub-layer, i.e., W(l) t ∈Rdhidden×d(l), where d(l) = d + dhidden × (l −1). 2.4 Linear Combination Layer The AGGCN model includes a linear combination layer to integrate representations from N different densely connected layers. Formally, the output of the linear combination layer is defined as: hcomb = Wcombhout + bcomb (5) where hout is the output by concatenating outputs from N separate densely connected layers, i.e., hout = [h(1); ...; h(N)] ∈Rd×N. Wcomb ∈ R(d×N)×d is a weight matrix and bcomb is a bias vector for the linear transformation. 2.5 AGGCNs for Relation Extraction After applying the AGGCN model over the dependency tree, we obtain hidden representations of all tokens. Given these representations, the goal of relation extraction is to predict a relation among entities. Following (Zhang et al., 2018), we concatenate the sentence representation and entity representations to get the final representation for classification. First we need to obtain the sentence representation hsent. It can be computed as: hsent = f(hmask) = f(AGGCN(x)) (6) where hmask represents the masked collective hidden representations. Masked here means we only select representations of tokens that are not entity tokens in the sentence. f : Rd×n →Rd×1 is a max pooling function that maps from n output vectors to 1 sentence vector. Similarly, we can obtain the entity representations. For the i-th entity, its representation hei can be computed as: hei = f(hei) (7) 245 where hei indicates the hidden representation corresponding to the i-th entity.3 Entity representations will be concatenated with sentence representation to form a new representation. Following (Zhang et al., 2018), we apply a feed-forward neural network (FFNN) over the concatenated representations inspired by relational reasoning works (Santoro et al., 2017; Lee et al., 2017): hfinal = FFNN([hsent; he1; ...hei]) (8) where hfinal will be taken as inputs to a logistic regression classifier to make a prediction. 3 Experiments 3.1 Data We evaluate the performance of our model on two tasks, namely, cross-sentence n-ary relation extraction and sentence-level relation extraction. For the cross-sentence n-ary relation extraction task, we use the dataset introduced in (Peng et al., 2017), which contains 6,987 ternary relation instances and 6,087 binary relation instances extracted from PubMed.4 Most instances contain multiple sentences and each instance is assigned with one of the five labels, including “resistance or nonresponse”, “sensitivity”, “response”, “resistance” and “none”. We consider two specific tasks for evaluation, i,e., binary-class n-ary relation extraction and multi-class n-ary relation extraction. 
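Before moving on to the experimental setup, the attention guided adjacency construction of Section 2.2 (Eq. 2) can be sketched compactly as below. Only the attention weights are kept as the matrices Ã(t); PyTorch, the class name, and the parameter shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionGuidedAdjacency(nn.Module):
    """Builds N attention guided adjacency matrices from node representations
    h of shape (batch, n, d), as in Eq. 2 (softmax of scaled dot products)."""
    def __init__(self, d, n_heads):
        super().__init__()
        self.d = d
        self.w_q = nn.ModuleList([nn.Linear(d, d, bias=False) for _ in range(n_heads)])
        self.w_k = nn.ModuleList([nn.Linear(d, d, bias=False) for _ in range(n_heads)])

    def forward(self, h):
        adjs = []
        for w_q, w_k in zip(self.w_q, self.w_k):
            scores = w_q(h) @ w_k(h).transpose(1, 2) / self.d ** 0.5
            adjs.append(torch.softmax(scores, dim=-1))   # one fully connected, edge-weighted graph per head
        return adjs                                      # N matrices of shape (batch, n, n)

h = torch.randn(2, 10, 300)                              # 2 sentences, 10 tokens, d = 300
a_tilde = AttentionGuidedAdjacency(300, n_heads=2)(h)
```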
For binary-class n-ary relation extraction, we follow (Peng et al., 2017) to binarize multi-class labels by grouping the four relation classes as “yes” and treating “none” as “no”. For the sentence-level relation extraction task, we follow the experimental settings in (Zhang et al., 2018) to evaluate our model on the TACRED dataset (Zhang et al., 2017) and Semeval-10 Task 8 (Hendrickx et al., 2010). With over 106K instances, the TACRED dataset introduces 41 relation types and a special “no relation” type to describe the relations between the mention pairs in instances. Subject mentions are categorized into “person” and “organization”, while object mentions are categorized into 16 fine-grained types, including “date”, “location”, etc. Semeval-10 Task 8 is a public dataset, which contains 10,717 instances with 9 relations and a special “other” class. 3The number of entities is fixed in n-ary relation extraction task. It is 3 for the first dataset and 2 for the second. 4The dataset is available at https://github.com/ freesunshine0316/nary-grn 3.2 Setup We tune the hyper-parameters according to results on the development sets. For the cross-sentence nary relation extraction task, we use the same data split used in (Song et al., 2018b)4, while for the sentence-level relation extraction task, we use the same development set from (Zhang et al., 2018)5. We choose the number of heads N for attention guided layer from {1, 2, 3, 4}, the block number M from {1, 2, 3}, the number of sublayers L in each densely connected layer from {2, 3, 4, 5, 6}. Through preliminary experiments on the development sets, we find that the combinations (N=2, M=2, L=5, dhidden=340) and (N=3, M=2, L=5, dhidden=300) give the best results on cross-sentence n-ary relation extraction and sentence-level relation extraction, respectively. GloVe (Pennington et al., 2014)6 vectors are used as the initialization for word embeddings. Models are evaluated using the same metrics as previous work (Song et al., 2018b; Zhang et al., 2018). We report the test accuracy averaged over five cross validation folds (Song et al., 2018b) for the cross-sentence n-ary relation extraction task. For the sentence-level relation extraction task, we report the micro-averaged F1 scores for the TACRED dataset and the macro-averaged F1 scores for the SemEval dataset (Zhang et al., 2018). 3.3 Results on Cross-Sentence n-ary Relation Extraction For cross-sentence n-ary relation extraction task, we consider three kinds of models as baselines: 1) a feature-based classifier (Quirk and Poon, 2017) based on shortest dependency paths between all entity pairs, 2) Graph-structured LSTM methods, including Graph LSTM (Peng et al., 2017), bidirectional DAG LSTM (Bidir DAG LSTM) (Song et al., 2018b) and Graph State LSTM (GS GLSTM) (Song et al., 2018b). These methods extend LSTM to encode graphs constructed from input sentences with dependency edges, 3) Graph convolutional networks (GCN) with pruned trees, which have shown efficacy on the relation extraction task (Zhang et al., 2018)7. Addition5https://nlp.stanford.edu/projects/ tacred/ 6We use the 300-dimensional Glove word vectors trained on the Common Crawl corpus https://nlp. stanford.edu/projects/glove/ 7The results are produced by the open implementation of Zhang et al. (2018). 
246 Model Binary-class Multi-class T B T B Single Cross Single Cross Cross Cross Feature-Based (Quirk and Poon, 2017) 74.7 77.7 73.9 75.2 SPTree (Miwa and Bansal, 2016) 75.9 75.9 Graph LSTM-EMBED (Peng et al., 2017) 76.5 80.6 74.3 76.5 Graph LSTM-FULL (Peng et al., 2017) 77.9 80.7 75.6 76.7 00000000000000000 + multi-task 82.0 78.5 Bidir DAG LSTM (Song et al., 2018b) 75.6 77.3 76.9 76.4 51.7 50.7 GS GLSTM (Song et al., 2018b) 80.3 83.2 83.5 83.6 71.7 71.7 GCN (Full Tree) (Zhang et al., 2018) 84.3 84.8 84.2 83.6 77.5 74.3 GCN (K=0) (Zhang et al., 2018) 85.8 85.8 82.8 82.7 75.6 72.3 GCN (K=1) (Zhang et al., 2018) 85.4 85.7 83.5 83.4 78.1 73.6 GCN (K=2) (Zhang et al., 2018) 84.7 85.0 83.8 83.7 77.9 73.1 AGGCN (ours) 87.1 87.0 85.2 85.6 79.7 77.4 Table 1: Average test accuracies in five-fold validation for binary-class n-ary relation extraction and multi-class n-ary relation extraction. “T” and “B” denote ternary drug-gene-mutation interactions and binary drug-mutation interactions, respectively. Single means that we report the accuracy on instances within single sentences, while Cross means the accuracy on all instances. K in the GCN models means that the preprocessed pruned trees include tokens up to distance K away from the dependency path in the LCA subtree. ally, we follow (Song et al., 2018b) to consider the tree-structured LSTM method (SPTree) (Miwa and Bansal, 2016) on drug-mutation binary relation extraction. Main results are shown in Table 1. We first focus on the binary-class n-ary relation extraction task. For ternary relation extraction (first two columns in Table 1 ), our AGGCN model achieves accuracies of 87.1 and 87.0 on instances within single sentence (Single) and on all instances (Cross), respectively, which outperform all the baselines. More specifically, our AGGCN model surpasses the state-of-the-art Graphstructured LSTM model (GS GLSTM) by 6.8 and 3.8 points for the Single and Cross settings, respectively. Compared to GCN models , our model obtains 1.3 and 1.2 points higher than the best performing model with pruned tree (K=1). For binary relation extraction (third and fourth columns in Table 1), AGGCN consistently outperforms GS GLSTM and GCN as well. These results suggest that, compared to previous full tree based methods, e.g., GS GLSTM, AGGCN is able to extract more information from the underlying graph structure to learn a more expressive representation through graph convolutions. AGGCN also performs better than GCNs, although its performance can be boosted via pruned trees. We believe this is because of the combination of densely connected layer and attention guided layer. The dense connections could facilitate information propagation in large graphs, enabling AGGCN to efficiently learn from long-distance dependencies without pruning techniques. Meanwhile, the attention guided layer can further distill relevant information and filter out noises from the representation learned by the densely connected layer. We next show the results on the multi-class classification task (last two columns in Table 1). We follow (Song et al., 2018b) to evaluate our model on all instances for both ternary and binary relations. This fine-grained classification task is much harder than coarse-grained classification task. As a result, the performance of all models degrades a lot. However, our AGGCN model still obtains 8.0 and 5.7 points higher than the GS GLSTM model for ternary and binary relations, respectively. 
We also notice that our AGGCN achieves a better test accuracy than all GCN models, which further demonstrates its ability to learn better representations from full trees. 3.4 Results on Sentence-level Relation Extraction We now report the results on the TACRED dataset for the sentence-level relation extraction task in Table 2. We compare our model against two kinds of models: 1) dependency-based models, 2) 247 Model P R F1 LR (Zhang et al., 2017) 73.5 49.9 59.4 SDP-LSTM (Xu et al., 2015c)* 66.3 52.7 58.7 Tree-LSTM (Tai et al., 2015)** 66.0 59.2 62.4 PA-LSTM (Zhang et al., 2017) 65.7 64.5 65.1 GCN (Zhang et al., 2018) 69.8 59.0 64.0 C-GCN (Zhang et al., 2018) 69.9 63.3 66.4 AGGCN (ours) 69.9 60.9 65.1 C-AGGCN (ours) 71.8 66.4 69.0 Table 2: Results on the TACRED dataset. Model with * indicates that the results are reported in Zhang et al. (2017), while model with ** indicates the results are reported in Zhang et al. (2018). Model F1 SVM (Rink and Harabagiu, 2010) 82.2 SDP-LSTM (Xu et al., 2015c) 83.7 SPTree (Miwa and Bansal, 2016) 84.4 PA-LSTM (Zhang et al., 2017) 82.7 C-GCN (Zhang et al., 2018) 84.8 C-AGGCN (ours) 85.7 Table 3: Results on the SemEval dataset. sequence-based models. Dependency-based models include the logistic regression classifier (LR) (Zhang et al., 2017), Shortest Path LSTM (SDPLSTM) (Xu et al., 2015c), Tree-structured neural model (Tree-LSTM) (Tai et al., 2015), GCN and Contextualized GCN (C-GCN) (Zhang et al., 2018). Both GCN and C-GCN models use the pruned trees. For sequence-based model, we consider the state-of-the-art Position Aware LSTM (PA-LSTM) (Zhang et al., 2017). As shown in Table 2, the logistic regression classifier (LR) obtains the highest precision score. We hypothesize that the reason behind this is due to the data imbalance issue. This feature-based method tends to predict a highly frequent label as the relation (e.g., “per:title”). Therefore, it has a high precision while having a relatively low recall. On the other hand, the neural models are able to better balance the precision and recall scores. Since GCN and C-GCN already show their superiority over other dependency-based models and PA-LSTM, we mainly compare our AGGCN model with them. We can observe that AGGCN outperforms GCN by 1.1 F1 points. We speculate Model F1 C-AGGCN 69.0 0 – Attention-guided layer (AG) 67.1 0 – Dense connected layer (DC) 67.3 0 – AG, DC 66.7 0 – Feed-Forward layer (FF) 67.8 Table 4: An ablation study for C-AGGCN model. Model F1 C-AGGCN (Full tree) 69.0 C-AGGCN (K=2) 67.5 C-AGGCN (K=1) 67.9 C-AGGCN (K=0) 67.0 Table 5: Results of C-AGGCN with pruned trees. that the limited improvement is due to the lack of contextual information about word order or disambiguation. Similar to C-GCN (Zhang et al., 2018), we extend our AGGCN model with a bidirectional LSTM network to capture the contextual representations which are subsequently fed into AGGCN layers. We term the modified model as C-AGGCN. Our C-AGGCN model achieves an F1 score of 69.0, which outperforms the state-ofart C-GCN model by 2.6 points. We also notice that AGGCN and C-AGGCN achieve better precision and recall scores than GCN and C-GCN, respectively. The performance gap between GCNs with pruned trees and AGGCNs with full trees empirically show that the AGGCN model is better at distinguishing relevant from irrelevant information for learning a better graph representation. We also evaluate our model on the SemEval dataset under the same settings as (Zhang et al., 2018). Results are shown in Table 3. 
This dataset is much smaller than TACRED (only 1/10 of TACRED in terms of the number of instances). Our C-AGGCN model (85.7) consistently outperforms the C-GCN model (84.8), showing the good generalizability. 3.5 Analysis and Discussion Ablation Study. We examine the contributions of two main components, namely, densely connected layers and attention guided layers, using the best-performing C-AGGCN model on the TACRED dataset. Table 4 shows the results. We can observe that adding either attention guided layers 248 20 40 60 80 100 percentage of training dataset (%) 58 60 62 64 66 68 70 F1 58.5 58.9 61.5 62.7 64 65.6 64.2 66.5 66.4 69 C-GCN C-AGGCN Figure 3: Comparison of C-AGGCN and C-GCN against different training data sizes. The results of C-GCN are reproduced from (Zhang et al., 2018). <20 20-30 30-40 40-50 >=50 Sentence length 64 65 66 67 68 69 70 F1 C-AGGCN (Full Tree) C-AGGCN (K=1) C-GCN (K=1) Figure 4: Comparison of C-AGGCN and C-GCN against different sentence lengths. The results of CGCN are reproduced from (Zhang et al., 2018). or densely connected layers improves the performance of the model. This suggests that both layers can assist GCNs to learn better information aggregations, producing better representations for graphs, where the attention-guided layer seems to be playing a more significant role. We also notice that the feed-forward layer is effective in our model. Without the feed-forward layer, the result drops to an F1 score of 67.8. Performance with Pruned Trees. Table 5 shows the performance of the C-AGGCN model with pruned trees, where K means that the pruned trees include tokens that are up to distance K away from the dependency path in the LCA subtree. We can observe that all the C-AGGCN models with varied values of K are able to outperform the state-of-the-art C-GCN model (Zhang et al., 2018) (reported in Table 2). Specifically, with the same setting as K=1, C-AGGCN surpasses C-GCN by 1.5 points of F1 score. This demonstrates that, with the combination of densely connected layer and attention guided layer, C-AGGCN can learn better representations of graphs than C-GCN for downstream tasks. In addition, we notice that the performance of C-AGGCN with full trees outperforms all C-AGGCNs with pruned trees. These results further show the superiority of “soft pruning” strategy over hard pruning strategy in utilizing full tree information. Performance against Sentence Length. Figure 4 shows the F1 scores of three models under different sentence lengths. We partition the sentence length into five classes (< 20, [20, 30), [30, 40), [40, 50), ≥50). In general, C-AGGCN with full trees outperforms C-AGGCN with pruned trees and C-GCN against various sentence lengths. We also notice that C-AGGCN with pruned trees performs better than C-GCN in most cases. Moreover, the improvement achieved by C-AGGCN with pruned trees decays when the sentence length increases. Such a performance degradation can be avoided by using full trees, which provide more information of the underlying graph structures. Intuitively, with the increase of the sentence length, the dependency graph becomes larger as more nodes are included. This suggests that C-AGGCN can benefit more from larger graphs (full tree). Performance against Training Data Size. Figure 3 shows the performance of C-AGGCN and C-GCN against different settings for training with different amount of training data. We consider five training settings (20%, 40%, 60%, 80%, 100% of the training data). 
C-AGGCN consistently outper249 forms C-GCN under the same amount of training data. When the size of training data increases, we can observe that the performance gap becomes more obvious. Specifically, using 80% of the training data, the C-AGGCN model is able to achieve a F1 score of 66.5, higher than C-GCN trained on the complete training set. These results demonstrate that our model is more effective in terms of using training resources. 4 Related Work Our work builds on a rich line of recent efforts on relation extraction models and graph convolutional networks. Relation Extraction. Early research efforts are based on statistical methods. Tree-based kernels (Zelenko et al., 2002) and dependency path-based kernels (Bunescu and Mooney, 2005) are explored to extract the relation. McDonald et al. (2005) construct maximal cliques of entities to predict relations. Mintz et al. (2009) include syntactic features to a statistical classifier. Recently, sequencebased models leverages different neural networks to extract relations, including convolutional neural networks (Zeng et al., 2014; Nguyen and Grishman, 2015; Wang et al., 2016), recurrent neural networks (Zhou et al., 2016; Zhang et al., 2017) the combination of both (Vu et al., 2016) and transformer (Verga et al., 2018). Dependency-based approaches also try to incorporate structural information into the neural models. Peng et al. (2017) first split the dependency graph into two DAGs, then extend the tree LSTM model (Tai et al., 2015) over these two graphs for n-ary relation extraction. Closest to our work, Song et al. (2018b) use graph recurrent networks (Song et al., 2018a) to directly encode the whole dependency graph without breaking it. The contrast between our model and theirs is reminiscent of the contrast between CNN and RNN. Various pruning strategies have also been proposed to distill the dependency information in order to further improve the performance. Xu et al. (2015b,c) adapt neural models to encode the shortest dependency path. Miwa and Bansal (2016) apply LSTM model over the LCA subtree of two entities. Liu et al. (2015) combine the shortest dependency path and the dependency subtree. Zhang et al. (2018) adopt a path-centric pruning strategy. Unlike these strategies that remove edges in preprocessing, our model learns to assign each edge a different weight in an end-to-end fashion. Graph Convolutional Networks. Early efforts that attempt to extend neural networks to deal with arbitrary structured graphs are introduced by Gori et al. (2005); Bruna (2014). Subsequent efforts improve its computational efficiency with local spectral convolution techniques (Henaff et al., 2015; Defferrard et al., 2016). Our approach is closely related to the GCNs (Kipf and Welling, 2017), which restrict the filters to operate on a first-order neighborhood around each node. More recently, Velickovic et al. (2018) proposed graph attention networks (GATs) to summarize neighborhood states by using masked selfattentional layers (Vaswani et al., 2017). Compared to our work, their motivations and network structures are different. In particular, each node only attends to its neighbors in GATs whereas AGGCNs measure the relatedness among all nodes. The network topology in GATs remains the same, while fully connected graphs will be built in AGGCNs to capture long-range semantic interactions. 5 Conclusion We introduce the novel Attention Guided Graph Convolutional Networks (AGGCNs). 
Experimental results show that AGGCNs achieve state-ofthe-art results on various relation extraction tasks. Unlike previous approaches, AGGCNs operate directly on the full tree and learn to distill the useful information from it in an end-to-end fashion. There are multiple venues for future work. One natural question we would like to ask is how to make use of the proposed framework to perform improved graph representation learning for graph related tasks (Bastings et al., 2017). Acknowledgements We would like to thank the anonymous reviewers for their valuable and constructive comments on this work. We would also like to thank Zhiyang Teng, Linfeng Song, Yuhao Zhang and Chenxi Liu for their helpful suggestions. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. This work is also partially supported by SUTD project PIE-SGP-AI-201801. 250 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proc. of EMNLP. Joan Bruna. 2014. Spectral networks and deep locally connected networks on graphs. In Proc. of ICLR. Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proc. of EMNLP. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proc. of EMNLP. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Proc. of NeurIPS. Michele Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A new model for learning in graph domains. In Proc. of IJCNN. Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association of Computational Linguistics. Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv preprint. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In SemEval@ACL. Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In Proc. of CVPR. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proc. of ICLR. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proc. of EMNLP. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proc. of ACL. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proc. of EMNLP. Ryan T. McDonald, Fernando Pereira, Seth Kulick, R. Scott Winters, Yang Jin, and Peter S. White. 2005. Simple algorithms for complex relation extraction with applications to biomedical ie. In Proc. of ACL. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proc. of ACL. 
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proc. of ACL. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proc. of VS@NAACL-HLT. Adam Paszke, Sam Gross, and Adam Lerer. 2017. Automatic differentiation in pytorch. In Proc. of workshop on NeurIPS. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics, 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP. Chris Quirk and Hoifung Poon. 2017. Distant supervision for relation extraction beyond the sentence boundary. In Proc. of EACL. Bryan Rink and Sanda M. Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In SemEval@ACL. Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter W. Battaglia, and Timothy P. Lillicrap. 2017. A simple neural network module for relational reasoning. In Proc. of NeurIPS. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018a. A graph-to-sequence model for amrto-text generation. In Proc. of ACL. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. N-ary relation extraction using graph state lstm. In Proc. of EMNLP. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proc. of ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In Proc. of ICLR. 251 Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proc. of NAACL-HLT. Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hinrich Sch¨utze. 2016. Combining recurrent and convolutional neural networks for relation classification. In Proc. of NAACL-HLT. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proc. of ACL. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015a. Show, attend and tell: Neural image caption generation with visual attention. In Proc. of ICML. Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In Proc. of ICML. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015b. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proc. of EMNLP. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015c. Classifying relations via long short term memory networks along shortest dependency paths. In Proc. of EMNLP. Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proc. of ACL. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. 
In Proc. of EMNLP. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jian Zhao. 2014. Relation classification via convolutional deep neural network. In Proc. of COLING. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proc. of EMNLP. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proc. of EMNLP. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proc. of ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2516–2526 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2516 Topic-Aware Neural Keyphrase Generation for Social Media Language Yue Wang1∗Jing Li2† Hou Pong Chan1 Irwin King1 Michael R. Lyu1 Shuming Shi2 1Department of Computer Science and Engineering The Chinese University of Hong Kong, HKSAR, China 2Tencent AI Lab, Shenzhen, China 1{yuewang, hpchan, king,lyu}@cse.cuhk.edu.hk 2{ameliajli,shumingshi}@tencent.com Abstract A huge volume of user-generated content is daily produced on social media. To facilitate automatic language understanding, we study keyphrase prediction, distilling salient information from massive posts. While most existing methods extract words from source posts to form keyphrases, we propose a sequence-to-sequence (seq2seq) based neural keyphrase generation framework, enabling absent keyphrases to be created. Moreover, our model, being topic-aware, allows joint modeling of corpus-level latent topic representations, which helps alleviate the data sparsity that widely exhibited in social media language. Experiments on three datasets collected from English and Chinese social media platforms show that our model significantly outperforms both extraction and generation models that do not exploit latent topics.1 Further discussions show that our model learns meaningful topics, which interprets its superiority in social media keyphrase generation. 1 Introduction As social media continues its worldwide expansion, the last decade has witnessed the revolution of interpersonal communication. While empowering individuals with richer and fresher information, the flourish of social media also results in millions of posts generated on a daily basis. Facing a sheer quantity of texts, language understanding has become a daunting task for human beings. Under this circumstance, there exists a pressing need for developing automatic systems capable of absorbing massive social media texts and figuring out what is important. *This work was partially done when Yue Wang was an intern at Tencent AI Lab. †Jing Li is the corresponding author. 1Our data and code are publicly released in https:// github.com/yuewang-cuhk/TAKG Source post with keyphrase “super bowl”: [S]: Somewhere, a wife that is not paying attention to the game, says ”I want the team in yellow pants to win.” Relevant tweets: [T1]: I been a steelers fan way before black & yellow and this super bowl! [T2]: I will bet you the team with yellow pants wins. [T3]: Wiz Khalifa song ’black and yellow” to spur the pittsburgh steelers and Lil Wayne is to sing ”green and yellow’ for the packers. Table 1: Sample tweets tagged with “super bowl” as their keyphrases. Blue and italic words can indicate the topic of super bowl. In this work, we study the prediction of keyphrases, generally formed with words or phrases reflecting main topics conveyed in input texts (Zhang et al., 2018). Particularly, we focus on producing keyphrases for social media language, proven to be beneficial to a broad range of applications, such as instant detection of trending events (Weng and Lee, 2011), summarizing public opinions (Meng et al., 2012), analyzing social behavior (Ruths and Pfeffer, 2014), and so forth. 
In spite of the substantial efforts made in social media keyphrase identification, most progress to date has focused on extracting words or phrases from source posts, thus failing to yield keyphrases containing absent words (i.e., words do not appear in the post). Such cases are indeed prominent on social media, mostly attributed to the informal writing styles of users therein. For example, Table 1 shows a tweet S tagged with keyphrase “super bowl” by its author, though neither “super” nor “bowl” appears in it.2 In our work, distinguishing from previous studies, we approach social media keyphrase prediction with a sequence 2Following common practice (Zhang et al., 2016, 2018), we consider author-annotated hashtags as tweets’ keyphrases. 2517 generation framework, which is able to create absent keyphrases beyond source posts. Our work is built on the success of deep keyphrase generation models based on neural sequence-to-sequence (seq2seq) framework (Meng et al., 2017). However, existing models, though effective on well-edited documents (e.g., scientific articles), will inevitably encounter the data sparsity issue when adapted to social media. It is essentially due to the informal and colloquial nature of social media language, which results in limited features available in the noisy data. For instance, only given the words in S (Table 1), it is difficult to figure out why “super bowl” is its keyphrase. However, by looking at tweets T1 to T3, we can see “yellow pants” is relevant to “steelers”, a super bowl team. As “yellow” and “pants” widely appear in tweets tagged with “super bowl’, it becomes possible to identify “super bowl” as S’s keyphrase. Here we propose a novel topic-aware neural keyphrase generation model that leverages latent topics to enrich useful features. Our model is able to identify topic words, naturally indicative of keyphrases, via exploring post-level word cooccurrence patterns, such as “yellow” and “pants” in S. Previous work have shown that corpus-level latent topics can effectively alleviate data sparsity in other tasks (Zeng et al., 2018; Li et al., 2018). The effects of latent topics, nevertheless, have never been explored in existing keyphrase generation research, particularly in the social media domain. To the best of our knowledge, our work is the first to study the benefit of leveraging latent topics on social media keyphrase generation. Also, our model, taking advantage of the recent advance of neural topic models (Miao et al., 2017), enables end-to-end training of latent topic modeling and keyphrase generation. We experiment on three newly constructed social media datasets. Two are from English platform Twitter and StackExchange, and the other from Chinese microblog Weibo. The comparison results over both extraction and generation methods show that our model can better produce keyphrases, significantly outperforming all the comparison models without exploiting latent topics. For example, on Weibo dataset, our model achieves 34.99% F1@1 compared with 32.01% yielded by a state-of-the-art keyphrase generation model (Meng et al., 2017). We also probe into our outputs and find that meaningful latent topics can be learned, which can usefully indicate keyphrases. At last, a preliminary study on scientific articles shows that latent topics work better on text genres with informal language style. 2 Related Work Our work is mainly in the line of two areas: keyphrase prediction and topic modeling. We introduce them in turn below. Keyphrase Prediction. 
Most previous efforts on this task adopt supervised or unsupervised approaches based on extraction — words or phrases selected from source documents to form keyphrases. Supervised methods are mostly based on sequence tagging (Zhang et al., 2016; Gollapalli et al., 2017) or binary classification using various features (Witten et al., 1999; Medelyan et al., 2009). For unsupervised methods, they are built on diverse algorithms, including graph ranking (Mihalcea and Tarau, 2004; Wan and Xiao, 2008), document clustering (Liu et al., 2009, 2010), and statistical models like TF-IDF (Salton and McGill, 1986). Our work is especially in the line of social media keyphrase prediction, where extractive approaches are widely employed (Zhang et al., 2016, 2018). On the contrary, we predict keyphrases in a sequence generation manner, allowing the creation of absent keyphrases. Our work is inspired by seq2seq-based keyphrase generation models (Meng et al., 2017; Chen et al., 2018, 2019a,b), which are originally designed for scientific articles. However, their performance will be inevitably compromised when directly applied to social media language owing to the data sparsity problem. Recently, Wang et al. (2019) propose a microblog hashtag generation framework, which explicitly enriches context with user responses. Different from them, we propose to leverage corpus-level latent topic representations, which can be learned without requiring external data. Its potential usefulness on keyphrase generation has been ignored in previous research and will be extensively studied here. Topic Modeling. Our work is closely related with topic models that discover latent topics from word co-occurrence in document level. They are commonly in the fashion of latent Dirichlet allocation (LDA) based on Bayesian graphical models 2518 Neural Topic Model ! " σ GRU GRU GRU … GRU $% &' &( &) *+,*./0 1% 2 *+,3 Topic-aware Decoder Sequence Encoder … … 45 46 47 48 4/ Figure 1: Our topic-aware neural keyphrase generation framework (§3). (Blei et al., 2003). These models, however, rely on the expertise involvement to customize model inference algorithms. Our framework exploits the recently proposed neural topic models (Miao et al., 2017; Srivastava and Sutton, 2017) to infer latent topics, which facilitate end-to-end training with other neural models and do not require modelspecific derivation. It has proven useful for citation recommendation (Bai et al., 2018) and conversation understanding (Zeng et al., 2019). In particular, Zeng et al. (2018) propose to jointly train topic models and short text classification, which cannot fit our scenario due to the large diversity of the keyphrases (Wang et al., 2019). Different from them, our latent topics are learned together with language generation, whose effects on keyphrase generation have never been explored before in existing work. 3 Topic-Aware Neural Keyphrase Generation Model In this section, we describe our framework that leverages latent topics in neural keyphrase generation. Figure 1 shows our overall architecture consisting of two modules — a neural topic model for exploring latent topics (§3.1) and a seq2seq-based model for keyphrase generation (§3.2). Formally, given a collection C with |C| social media posts {x1, x2, ..., x|C|} as input, we process each post x into bag-of-words (BoW) term vector xbow and word index sequence vector xseq. xbow is a V -dim vector over the vocabulary (V being the vocabulary size). 
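To make the two input views concrete, the following is a minimal, illustrative Python sketch and not the authors' released code; the function and variable names are hypothetical, and in the actual setup the BoW vocabulary additionally excludes stop words and punctuation (see §4).

```python
from collections import Counter
import numpy as np

def encode_post(tokens, vocab):
    """Convert one tokenized post into the two views used by the framework.

    tokens: list of word strings for the post.
    vocab:  dict mapping word -> integer id, of size V.
    Returns (x_bow, x_seq): a V-dim count vector and a list of word ids.
    """
    V = len(vocab)
    x_bow = np.zeros(V, dtype=np.float32)
    for word, count in Counter(tokens).items():
        if word in vocab:                      # out-of-vocabulary words are dropped
            x_bow[vocab[word]] = count
    x_seq = [vocab[w] for w in tokens if w in vocab]
    return x_bow, x_seq

# Toy example
vocab = {"super": 0, "bowl": 1, "yellow": 2, "pants": 3, "team": 4}
x_bow, x_seq = encode_post(["team", "yellow", "pants", "team"], vocab)
print(x_bow)   # [0. 0. 1. 1. 2.]
print(x_seq)   # [4, 2, 3, 4]
```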
It is fed into the neural topic model following the BoW assumption (Miao et al., 2017). xseq serves as the input for the seq2seqbased keyphrase generation model. Below we first introduce our two modules and then describe how they are jointly trained (§3.3). 3.1 Neural Topic Model Our neural topic model (NTM) module is inspired by Miao et al. (2017) based on variational autoencoder (Kingma and Welling, 2013), which consists of an encoder and a decoder to resemble the data reconstruction process. Specifically, the input xbow is first encoded into a continuous latent variable z (representing x’s topic) by a BoW encoder. Then the BoW decoder, conditioned on z, attempts to reconstruct x and outputs a BoW vector x′bow. Particularly, the decoder simulates topic model’s generation process. We then describe their division of labor. BoW Encoder. The BoW encoder is responsible for estimating prior variables µ and σ, which will be used to induce intermediate topic representation z. We adopt the following formula: µ = fµ(fe(xbow)), log σ = fσ(fe(xbow)), (1) where f∗(·) is a neural perceptron with an ReLUactivated function following Zeng et al. (2018). BoW Decoder. Analogous to LDA-style topic models, it is assumed that there are K topics underlying the given corpus C. Each topic k is represented with a topic-word distribution φk over the vocabulary, and each post x ∈C has a topic mixture denoted by θ, a K-dim distributional vector. Specifically in neural topic model, θ is constructed by Gaussian softmax (Miao et al., 2017). The decoder hence takes the following steps to simulate how each post x is generated: • Draw latent topic variable z ∼N(µ, σ2) • Topic mixture θ = softmax(fθ(z)) • For each word w ∈x – Draw w ∼softmax(fφ(θ)) Here f∗(·) is also a ReLU-activated neural perceptron for inputs. In particular, we employ the weight matrix of fφ(·) as the topic-word distributions (φ1, φ2, ..., φK). In the following, we adopt the topic mixture θ as the topic representations to guide keyphrase generation. 3.2 Neural Keyphrase Generation Model Here we describe how we generate keyphrases with a topic-aware seq2seq model, which incorporates latent topics (learned by NTM) in its generation process. Below comes more details. 2519 Overview. The keyphrase generation module (KG model) is fed with source post x in its word sequence form xseq = ⟨w1, w2, ..., w|x|⟩(|x| is the number of words in x). Its target is to output a word sequence y as x’s keyphrase. Particularly, for a source post with multiple gold-standard keyphrases, we follow the practice in Meng et al. (2017) to pair its copies with each of the gold standards to form a training instance. To generate keyphrases for source posts, the KG model employs a seq2seq model. The sequence encoder distills indicative features from an input source post. The decoder then generates its keyphrase, conditioned on the encoded features and the latent topics yielded by NTM (henceforth topic-aware sequence decoder). Sequence Encoder. We employ a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) to encode the input source sequence. Each word wi ∈xseq (i = 1, 2, ..., |x|) is first embedded into an embedding vector νi, and then mapped into forward and backward hidden states (denoted as −→ hi and ←− hi) with the following defined operations: −→ hi = fGRU(νi, hi−1), (2) ←− hi = fGRU(νi, hi+1). (3) The concatenation of −→ hi and ←− hi, [−→ hi; ←− hi], serves as wi’s hidden state in encoder, denoted as hi. 
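To illustrate how Eqs. 2-3 translate into code, here is a short PyTorch sketch of the bidirectional GRU encoder. It is a sketch under the settings reported in §4 (embedding size 150, two Bi-GRU layers, 150 hidden units per direction), not the authors' implementation, and the class and variable names are our own.

```python
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Bi-GRU encoder sketch for Eqs. 2-3: maps word ids to hidden states h_i."""

    def __init__(self, vocab_size, emb_dim=150, hidden_dim=150, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True concatenates the forward and backward states,
        # so each h_i has dimension 2 * hidden_dim (= 300 here).
        self.gru = nn.GRU(emb_dim, hidden_dim, num_layers=num_layers,
                          bidirectional=True, batch_first=True)

    def forward(self, x_seq):
        # x_seq: (batch, src_len) tensor of word ids
        emb = self.embedding(x_seq)      # (batch, src_len, emb_dim)
        states, _ = self.gru(emb)        # (batch, src_len, 2 * hidden_dim)
        return states                    # h_1, ..., h_|x|, later used as the memory bank

encoder = SequenceEncoder(vocab_size=30000)
h = encoder(torch.randint(0, 30000, (8, 20)))   # a batch of 8 posts, 20 tokens each
print(h.shape)                                   # torch.Size([8, 20, 300])
```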
Finally, we construct a memory bank: M = ⟨h1, h2, ..., h|x|⟩, for decoder’s attentive retrieval. Topic-Aware Sequence Decoder. In general, conditioned on the memory bank M and latent topic θ from NTM, we define the process to generate its keyphrase y with the following probability: Pr(y | x) = |y| Y j=1 Pr(yj | y<j, M, θ), (4) where y<j = ⟨y1, y2, ..., yj−1⟩. And Pr(yj|y<j, M, θ), denoted as pj, is a word distribution over vocabulary, reflecting how likely a word to fill in the j-th slot in target keyphrase. Below we describe the procedure to obtain pj. Our sequence decoder employs a unidirectional GRU layer. Apart from the general state update, the j-th hidden state sj is further designed to take input x’s topic mixture θ into consideration: sj = fGRU([uj; θ], sj−1), (5) where uj is the j-th embedded decoder input3 and sj−1 is the previous hidden state. Here [; ] denotes the concatenation operation. The decoder also looks at M (learned by sequence encoder) and puts an attention on it to capture important information. When predicting the j-th word in keyphrase, the attention weights on wi ∈xseq is defined as: αij = exp(fα(hi, sj, θ)) P|x| i′=1 exp(fα(hi′, sj, θ)) , (6) where fα(hi, sj, θ) = vT α tanh(Wα[hi; sj; θ] + bα). (7) Here vα, Wα, and bα are trainable parameters. fα(·) measures the semantic relations between the i-th word in the source and the j-th target word to be predicted. Such relations are also calibrated with the input’s latent topic θ in order to explore and highlight topic words. We hence obtain the topic sensitive context vector cj with: cj = |x| X i=1 αijhi. (8) Further, conditioned on cj, we generate the j-th word over the global vocabulary according to: pgen = softmax(Wgen[sj; cj] + bgen). (9) In addition, we adopt copy mechanism (See et al., 2017) following Meng et al. (2017), which allows keywords to be directly extracted from the source input. Specifically, we adopt a soft switcher λj ∈[0, 1] to determine whether to copy a word from source as the j-th target word: λj = sigmoid(Wλ[uj; sj; cj; θ] + bλ), (10) with Wλ and bλ being learnable parameters. Topic information θ is also injected here to guide the switch decision. Finally, we obtain distribution pj for predicting the j-th target word with the formula below: pj = λj · pgen + (1 −λj) · |x| X i=1 αij, (11) where attention scores {αij}|x| i=1 serve as the extractive distribution over the source input. 3We take the previous word from gold standards in training by teacher forcing and from the predicted word in test. 2520 3.3 Jointly Learning Topics and Keyphrases Our neural framework allows end-to-end learning of latent topic modeling and keyphrase generation. We first define objective functions for the two modules respectively. For NTM, the objective function is defined based on negative variational lower bound (Blei et al., 2016). Here due to space limitation, we omit the derivation details already described in Miao et al. (2017), and directly give its loss function: LNTM = DKL(p(z) || q(z | x))−Eq(z | x)[p(x | z)], (12) where the first term is the Kullback-Leibler divergence loss and the second term reflects the reconstruction loss. p(z) denotes a standard normal prior. q(z | x) and p(x | z) represent the process of BoW encoder and BoW decoder respectively. For KG model, we minimize the cross entropy loss over all training instances: LKG = − N X n=1 log(Pr(yn | xn, θn)), (13) where N denotes the number of training instances and θn is xn’s latent topics induced from NTM. 
Finally, we define the entire framework’s training objective with the linear combination of LNTM and LKG: L = LNTM + γ · LKG, (14) where the hyper-parameter γ balances the effects of NTM and KG model. Our two modules can be jointly trained with their parameters updated simultaneously. For inference, we adopt beam search and generate a ranking list of output keyphrases following Meng et al. (2017). 4 Experiment Setup Datasets. We conduct experiments on three social media datasets collected from two English online platforms, Twitter and StackExchange, and a Chinese microblog website, Weibo. Twitter and Weibo are microblogs encouraging users to freely post with a wide range of topics, while StackExchange, an online Q&A forum, are mainly for question asking (with a title and a description) and seeking answers from others. The Twitter dataset contains tweets from TREC 2011 microblog track.4 For Weibo dataset, we first 4http://trec.nist.gov/data/tweets/ Source posts # of Avg len # of KP Source posts per post per post vocab Twitter 44,113 19.52 1.13 34,010 Weibo 46,296 33.07 1.06 98,310 StackExchange 49,447 87.94 2.43 99,775 Target KP |KP| Avg len % of Target per KP abs KP vocab Twitter 4,347 1.92 71.35 4,171 Weibo 2,136 2.55 75.74 2,833 StackExchange 12,114 1.41 54.32 10,852 Table 2: Data statistics of source posts (on the top) and target keyphrases (on the bottom). Avg len: the average number of tokens. KP: keyphrases. Abs KP: absent keyphrases. |KP|: the number of distinct keyphrases. tracked the real-time trending hashtags in Jan-Aug 2014,5 and then used them as keywords to search posts with hashtag-search API.6 And the StackExchange dataset is randomly sampled from a publicly available raw corpus.7 For the target keyphrases, we employ userannotated hashtags for Twitter and Weibo following Zhang et al. (2016), and author-assigned tags (e.g., “artificial-intelligence”) for StackExchange. Posts without such keyphrase tags are hence removed from the datasets. Particularly, for StackExchange, we concatenate the question title together with its description as the source input. For Twitter and Weibo source posts, we retain tokens in hashtags (without # symbols) for those appearing in the middle of posts, since they generally act as semantic elements and thus considered as present keyphrases (Zhang et al., 2016). For those appearing before or after a post, we remove the entire hashtags and regard them as absent keyphrases as is done in Wang et al. (2019). For model training and evaluation, we split the data into three subsets with 80%, 10%, and 10%, corresponding to training, development, and test set. The statistics of the three datasets are shown in Table 2. As can be seen, over 50% of the keyphrases do not appear in their source posts, thus extractive approaches will fail in dealing with these posts. We also observe that StackExchange exhibits different keyphrase statistics compared to either Twitter or Weibo, with more keyphrases appearing in one post and more diverse keyphrases. 5http://open.weibo.com/wiki/Trends/ 6http://www.open.weibo.com/wiki/2/ 7https://archive.org/details/ stackexchange 2521 Preprocessing. For Twitter dataset, we employed Twitter preprocessing toolkit in Baziotis et al. (2017) for source post and hashtag (keyphrase) tokenization. 
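Before turning to preprocessing, the combined objective in Eq. 14 can be made concrete with a short PyTorch-style sketch. This is illustrative rather than the released implementation: the tensor names are hypothetical, and the closed-form Gaussian KL term below is the standard expression for the N(0, I) prior that the paper assumes.

```python
import torch
import torch.nn.functional as F

def joint_loss(mu, log_sigma, recon_logits, x_bow, kp_log_probs, kp_targets, gamma=1.0):
    """Illustrative computation of L = L_NTM + gamma * L_KG (Eqs. 12-14).

    mu, log_sigma : (batch, latent_dim) outputs of the BoW encoder (Eq. 1)
    recon_logits  : (batch, V) BoW decoder outputs used to reconstruct x_bow
    x_bow         : (batch, V) bag-of-words count vectors
    kp_log_probs  : (batch * steps, V) log of the final word distribution p_j (Eq. 11)
    kp_targets    : (batch * steps,) gold keyphrase word ids
    """
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + 2 * log_sigma - mu.pow(2) - (2 * log_sigma).exp(), dim=1)
    # Reconstruction term of the variational bound (multinomial BoW likelihood)
    recon = -torch.sum(x_bow * F.log_softmax(recon_logits, dim=1), dim=1)
    loss_ntm = (kl + recon).mean()

    # Cross-entropy over gold keyphrase words (Eq. 13)
    loss_kg = F.nll_loss(kp_log_probs, kp_targets)

    return loss_ntm + gamma * loss_kg   # gamma = 1.0 in the paper
```

In training, this combined loss would be minimized only after the warm-up described below, since the paper pretrains the NTM for 100 epochs and the KG model for one epoch before joint updates.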
Chinese Weibo data was preprocessed with Jieba toolkit8 for word segmentation, and English StackExchange data with natural language toolkit (NLTK) for tokenization.9 We further take the following preprocessing steps for each of the three datasets: First, posts with meaningless keyphrases (e.g., singlecharacter ones) were filtered out; also removed were non-alphabetic (for English data) and retweet-only (e.g., “RT”) posts. Second, links, mentions (@username), and digits were replaced with generic tags “URL”, “MENT”, and “DIGIT” following Wang et al. (2019). Third, a vocabulary was maintained, with 30K most frequent words for Twitter, and 50K for Weibo and StackExchange each. For BoW vocabulary of the input xbow for NTM, stop words and punctuation were removed. Parameter Settings. We implement our model based on the pytorch framework in Paszke et al. (2017). For NTM, we implement it following the design10 in Zeng et al. (2018) and set topic number K to 50. The KG model is set up mostly based on Meng et al. (2017). For its sequence encoder, we adopt two layers of bidirectional GRU and one layer of unidirectional GRU for its decoder. The hidden size of the GRU is 300 (for biGRU, 150 for each direction). For the embedding, its size is set to 150 and values are randomly initialized. We apply Adam (Kingma and Ba, 2014) with initial learning rate as 1e−3. In training process, gradient clipping = 1.0 is conducted to stabilize the training. Early-stopping strategy (Caruana et al., 2001) is adopted based on the validation loss. Before joint training, we pretrain NTM for 100 epochs and KG model for 1 epoch as the convergence speed of NTM is much slower than the KG model. We empirically set the γ = 1.0 for balancing NTM and KG loss (Eq. 14) and iteratively update the parameters in each module and then their combination in turn. Comparisons. In comparison, we first consider a simple baseline selecting majority keyphrases (henceforth MAJORITY) — the top K keyphrases ranked by their frequency in training data are used 8https://github.com/fxsjy/jieba 9https://www.nltk.org/ 10https://github.com/zengjichuan/TMN as the keyphrases for all test instances. We also compare with the following extractive baselines, where n-grams (n = 1, 2, 3) in source posts are ranked by TF-IDF scores (henceforth TF-IDF), TextRank algorighm (Mihalcea and Tarau, 2004) (henceforth TEXTRANK), and KEA system (Witten et al., 1999) (henceforth KEA). We also compare with a neural state-of-the-art keyphrase extraction model based on sequence tagging (Zhang et al., 2016) (henceforth SEQ-TAG). In addition, we take the following state-of-the-art keyphrase generation models into consideration: seq2seq model with copy mechanism (Meng et al., 2017) (henceforth SEQ2SEQ-COPY) and its variation SEQ2SEQ without copy mechanism, SEQ2SEQCORR (Chen et al., 2018) exploiting keyphrase correlations, and TG-NET (Chen et al., 2019b) jointly modeling of titles and descriptions (thereby only tested on StackExchange). 5 Experimental Results In the experiment, we first evaluate our performance on keyphrase prediction (§5.1). Then, we study whether jointly learning keyphrase generation can in turn help produce coherent topics (§5.2). At last, further discussions (§5.3) are presented with an ablation study, a case study, and an analysis for varying text genres. 5.1 Keyphrase Prediction Results In this section, we examine our performance in predicting keyphrases for social media. 
We first discuss the main comparison results, followed by a discussion for present and absent keyphrases. Popular information retrieval metrics macroaverage F1@K and mean average precision (MAP) are adopted for evaluation. Here for Twitter and Weibo, most posts are tagged with one keyphrase on average (Table 2), thus F1@1 and F1@3 are reported. For StackExchange, we report F1@3 and F1@5, because on average, posts have 2.4 keyphrases. MAP is measured over the top 5 predictions for all three datasets. For keyphrase matching, we consider keyphases after stemmed by Porter Stemmer following Meng et al. (2017). Main Comparison Discussion. Table 3 shows the main comparison results on our three datasets, where higher scores indicate better performance. From all three datasets, we observe: • Social media keyphrase prediction is challenging. As can be seen, all simple baselines give 2522 Model Twitter Weibo StackExchange F1@1 F1@3 MAP F1@1 F1@3 MAP F1@3 F1@5 MAP Baselines MAJORITY 9.36 11.85 15.22 4.16 3.31 5.47 1.79 1.89 1.59 TF-IDF 1.16 1.14 1.89 1.90 1.51 2.46 13.50 12.74 12.61 TEXTRANK 1.73 1.94 1.89 0.18 0.49 0.57 6.03 8.28 4.76 KEA 0.50 0.56 0.50 0.20 0.20 0.20 15.80 15.23 14.25 State of the arts SEQ-TAG 22.79±0.3 12.27±0.2 22.44±0.3 16.34±0.2 8.99±0.1 16.53±0.3 17.58±1.6 12.82±1.2 19.03±1.3 SEQ2SEQ 34.10±0.5 26.01±0.3 41.11±0.3 28.17±1.7 20.59±0.9 34.19±1.7 22.99±0.3 20.65±0.2 23.95±0.3 SEQ2SEQ-COPY 36.60±1.1 26.79±0.5 43.12±1.2 32.01±0.3 22.69±0.2 38.01±0.1 31.53±0.1 27.41±0.2 33.45±0.1 SEQ2SEQ-CORR 34.97±0.8 26.13±0.4 41.64±0.5 31.64±0.7 22.24±0.5 37.47±0.8 30.89±0.3 26.97±0.2 32.87±0.6 TG-NET 32.02±0.3 27.84±0.3 34.05±0.4 Our model 38.49±0.3 27.84±0.0 45.12±0.2 34.99±0.3 24.42±0.2 41.29±0.4 33.41±0.2 29.16±0.1 35.52±0.1 Table 3: Main comparison results displayed with average scores (in %) and their standard deviations over the results with 5 sets of random initialization seeds. Boldface scores in each column indicate the best results. Our model significantly outperforms all comparisons on all three datasets (p < 0.05, paired t-test). poor performance. This indicates that predicting keyphrases for social media language is a challenging task. It is impossible to rely on simple statistics or rules to yield good results. • Seq2seq-based keyphrase generation models are effective. Compared to the extractive baselines and SEQ-TAG, seq2seq-based models perform much better. It is because social media’s informal language style results in a large amount of absent keyphrases (Table 2), which is impossible for extractive methods to make correct predictions. We also find SEQ2SEQ-COPY better than SEQ2SEQ, suggesting the effectiveness to combine source word extraction with word generation when predicting keyphrases. • Latent topics are consistently helpful for indicating keyphrases. It is observed that our model achieves the best results, significantly outperforming all comparisons by a large margin. This shows the usefulness of leveraging latent topics in keyphrase prediction. Interestingly, compared with StackExchange, we achieve larger improvements for Twitter and Weibo, both exhibiting more informal nature and prominent word order misuse. For such text genres, latent topics, learned under BoW assumption, are more helpful. Also, the following interesting points can be observed by comparing results across datasets: • Keyphrase generation is more challenging for StackExchange. 
When MAP scores of seq2seq-based methods are compared over the three datasets, we find that the scores on StackExchange are generally lower. It is probably attributed to the data characteristics of more diverse keyphrases and larger target vocabulary (Table 2). • Twitter and Weibo data is noisier. We notice that TF-IDF, TEXTRANK, and KEA perform much worse than MAJORITY, while the opposite is observed on StackExchange. It is because Twitter and Weibo, as microblogs, contain shorter posts (Table 2) and exhibit more informal language styles. In general, models relying on simple word statistics would suffer from such noisy data. Twitter Weibo StackExchange 40 50 60 70 80 90 (a) Present F1@1 (%) Seq-Tag Seq2Seq Seq2Seq-Copy Seq2Seq-Corr TG-Net Our model Twitter Weibo StackExchange 20 25 30 35 40 45 (b) Absent R@5 (%) Seq2Seq Seq2Seq-Copy Seq2Seq-Corr TG-Net Our model Figure 2: The prediction results for present (on the top) and absent keyphrases (on the bottom, R@5: recall@5). For present cases, from left to right shows the results of SEQ-TAG, SEQ2SEQ, SEQ2SEQ-COPY, SEQ2SEQ-CORR, TG-NET (only for StackExchange), and our model. For absent cases, models (except SEQTAG) are shown in the same order. Present and Absent Keyphrase Prediction. We further discuss how our model performs in producing present and absent keyphrases. The comparison results with all neural-based models are shown in Figure 2. Here F1@1 is adopted for evaluating the prediction of present keyphrases and recall@5 for absent keyphrases. 2523 Datasets Twitter StackExchange LDA 41.12 35.13 BTM 43.12 43.52 NTM 43.82 43.04 Our model 46.28 45.12 Table 4: CV topic coherence score comparison on our two English datasets. Higher scores indicate better coherence. Our model produces the best scores. The results indicate that our model consistently outperforms comparison models in predicting either absent or present keyphrases. Also, interestingly, copy mechanism seems to somehow sacrifice the performance on absent keyphrase generation for correctly extracting the present ones. Such side effects, however, are not observed on our model. It is probably attributed to our ability to associate posts with corpus-level topics, hence enabling absent keywords from other posts to be “copied”. This observation also demonstrates the latent topics can help our model to better decide whether to copy (Eq. 10). 5.2 Latent Topic Analysis We have shown latent topics useful for social media keyphrase generation in §5.1. Here we analyze whether our model can learn meaningful topics. Coherence Score Comparison. We first evaluate topic coherence with an automatic CV measure. Here we employ Palmetto toolkit11 (R¨oder et al., 2015) on the top 10 words from each latent topic following Zeng et al. (2018). The results are only reported on English Twitter and StackExchange because Palmetto does not support Chinese. For comparisons, we consider LDA (implemented with a gensim LdaMulticore package12), BTM13 (Yan et al., 2013) (a state-of-theart topic model specifically for short texts), and NTM (Miao et al., 2017). For LDA and BTM, we run Gibbs sampling with 1, 000 iterations to ensure convergence. From the results in Table 4, we observe that our model outperforms all the comparison topic models by large margins, which implies that jointly exploring keyphrase generation can in turn help produce coherent topics. 11https://github.com/dice-group/ Palmetto/ 12https://pypi.org/project/gensim/ 13https://github.com/xiaohuiyan/BTM Sample Topics. 
To further evaluate whether our model can produce coherent topics qualitatively, we probe into some sample words (Table 5) reflecting the topic “super bowl” discovered by various models from Twitter. As can be seen, there are mixed non-topic words 14 in LDA’s, BTM’s, and NTM’s sample topic. Compared with them, our inferred topic looks more coherent. For example, “steeler” and “packer”, names of super bowl teams, are correctly included into the cluster. LDA bowl super quote steeler jan watching egypt playing glee girl BTM bowl super anthem national christina aguilera fail word brand playing NTM super bowl eye protester winning watch halftime ship sport mena Our model bowl super yellow green packer steeler nom commercial win winner Table 5: Top 10 terms for latent topics “super bowl”. Red and underlined words indicate non-topic words. 5.3 Further Discussions Ablation Study. We compare the results of our full model and its four ablated variants to analyze the relative contributions of topics on different components. The results in Table 6 indicate the competitive effect of topics on decoder attention and that on hidden states, but combining them both help our full model achieve the best performance. We also observe that pre-trained topics only bring a small boost, indicated by the close scores yielded by our model (separate train) and SEQ2SEQ-COPY. This suggests that the joint training is crucial to better absorb latent topics. Model Twitter Weibo SE SEQ2SEQ-COPY 36.60 32.01 31.53 Our model (separate train) 36.75 32.75 31.78 Our model (w/o topic-attn) 37.24 32.42 32.34 Our model (w/o topic-state) 37.44 33.48 31.98 Our full model 38.49 34.99 33.41 Table 6: Comparison results of our ablation models on three datasets (SE: StackExchange) — separate train: our model with pre-trained latent topics; w/o topic-attn: decoder attention without topics (Eq. 7); w/o topicstate: decoder hidden states without topics (Eq. 5). We report F1@1 for Twitter and Weibo, F1@3 for StackExchange. Best results are in bold. 14Non-topic words refer to words that cannot clearly indicate the corresponding topic, including off-topic words more likely to reflect other topics. 2524 Case Study. We feed the tweet S in Table 1 into both SEQ2SEQ-COPY and our model. Eventually our model correctly predicts the keyphrase as “super bowl” while SEQ2SEQ-COPY gives a wrong prediction “team follow back” (posted to ask other to follow back). To analyze the reason behind, we visualize the attention weights of two models in Figure 3. It can be seen that both models highlight the common word “team”, which frequently appears in “team follow back”-tagged tweets. By joint modeling of latent topics, our model additionally emphasizes topic words “yellow” and “pants”, which are signals indicating a super bowl team steeler (also reflected in the 1st topic) and thus helpful to correctly generate “super bowl” as its keyphrase. Without such topic guidance, SEQ2SEQ-COPY wrongly predicts a common but unrelated term “team follow back”. wife paying attention game want team yellow pants win super bowl (a) Seq2Seq-Copy super bowl (b) Our Model 1st Topic steeler national team packer win Figure 3: Attention visualization for the sample post in Table 1. Only non-stopwords are selected. The table below shows the top five words for the 1st topic. Topic-Aware KG for Other Text Genres. We have shown the effectiveness of latent topics on social media keyphrase generation. 
To examine how they affect in identifying keyphrases for welledited language, we also experiment on the traditional scientific article datasets (Meng et al., 2017), but limited improvements are observed. Latent topics can better help keyphrase generation on social media, probably because there are larger proportion of keyphrases with absent words (Figure 4), where latent topics can cluster relevant posts and enrich the source contexts. Another possible reason lies in that social media language exhibits prominent arbitrary word orders. Thus latent topics, learned under BoW assumption, can better provide useful auxiliary features. 1 2 3 >3 10 20 30 40 50 60 70 80 90 Proprotion of Absent Keyphrase (%) Twitter Weibo StackExchange KP20k Inspec Krapivin SemEval NUS Figure 4: Proportion of absent n-gram keyphrases (n: 1, 2, 3, > 3). The dashed lines with ‘*’ marks: the five scientific article datasets used in Meng et al. (2017). 6 Conclusion and Future Work We have presented a novel social media keyphrase generation model that allows the joint learning of latent topic representations. Experimental results on three newly constructed social media datasets show that our model significantly outperforms state-of-the-art methods in keyphrase prediction, meanwhile produces more coherent topics. Further analysis interprets our superiority to discover key information from noisy social media data. In the future, we will explore how to explicitly leverage the topic-word distribution to further improve the performance. Also, our topic-aware neural keyphrase generation model can be investigated in a broader range of text generation tasks. Acknowledgements This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We thank ACL reviewers for their insightful suggestions on various aspects of this work. References Haoli Bai, Zhuangbin Chen, Michael R. Lyu, Irwin King, and Zenglin Xu. 2018. Neural relational topic models for scientific article analysis. In Proceedings of ACM International Conference on Information and Knowledge Management. Christos Baziotis, Nikos Pelekis, and Christos Doulkeridis. 2017. Datastories at semeval-2017 task 4: Deep LSTM with attention for message-level and topic-based sentiment analysis. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2525 David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. 2016. Variational inference: A review for statisticians. CoRR, abs/1601.00670. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research. Rich Caruana, Steve Lawrence, and C Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Proceedings of Neural Information Processing Systems. Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase generation with correlation constraints. In Proceedings of Empirical Methods in Natural Language Processing. Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, and Irwin King. 2019a. An integrated approach for keyphrase generation via exploring the power of retrieval and extraction. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Wang Chen, Yifan Gao, Jiani Zhang, Irwin King, and Michael R. Lyu. 2019b. Title-guided encoding for keyphrase generation. 
In Proceedings of AAAI Conference on Artificial Intelligence. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of Empirical Methods in Natural Language Processing. Sujatha Das Gollapalli, Xiaoli Li, and Peng Yang. 2017. Incorporating expert knowledge into keyphrase extraction. In Proceedings of AAAI Conference on Artificial Intelligence. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Jing Li, Yan Song, Zhongyu Wei, and Kam-Fai Wong. 2018. A joint model of conversational discourse and latent topics on microblogs. Journal of Computational Linguistics. Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of Empirical Methods in Natural Language Processing. Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of Empirical Methods in Natural Language Processing. Olena Medelyan, Eibe Frank, and Ian H. Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of Empirical Methods in Natural Language Processing. Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of Association for Computational Linguistics. Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Sujian Li, and Houfeng Wang. 2012. Entity-centric topic-oriented opinion summarization in twitter. In Proceedings of ACM International Conference on Knowledge Discovery and Data Mining. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of International Conference on Machine Learning. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of Empirical Methods in Natural Language Processing. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In Proceedings of Neural Information Processing Systems. Michael R¨oder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of ACM International Conference on Web Search and Data Mining. Derek Ruths and J¨urgen Pfeffer. 2014. Social media for large studies of behavior. Journal of Science. Gerard Salton and Michael J McGill. 1986. Introduction to modern information retrieval. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of Association for Computational Linguistics. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. arXiv preprint arXiv:1703.01488. Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of AAAI Conference on Artificial Intelligence. Yue Wang, Jing Li, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Microblog hashtag generation via encoding conversation contexts. 
In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jianshu Weng and Bu-Sung Lee. 2011. Event detection in twitter. In Proceedings of AAAI conference on weblogs and social media. 2526 Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. KEA: practical automatic keyphrase extraction. In Proceedings of ACM conference on Digital Libraries. Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of international conference on World Wide Web. Jichuan Zeng, Jing Li, Yulan He, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2019. What you say and how you say it: Joint modeling of topics and discourse in microblog conversations. Transactions of Association for Computational Linguistics. Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2018. Topic memory networks for short text classification. In Proceedings of Empirical Methods in Natural Language Processing. Qi Zhang, Yang Wang, Yeyun Gong, and Xuanjing Huang. 2016. Keyphrase extraction using deep recurrent neural networks on twitter. In Proceedings of Empirical Methods in Natural Language Processing. Yingyi Zhang, Jing Li, Yan Song, and Chengzhi Zhang. 2018. Encoding conversation context for neural keyphrase extraction from microblog posts. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2527–2537 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2527 #YouToo? Detection of Personal Recollections of Sexual Harassment on Social Media Arijit Ghosh Chowdhury∗ Manipal Institute of Technology [email protected] Ramit Sawhney∗ Netaji Subhas Institute of Technology [email protected] Rajiv Ratn Shah MIDAS, IIIT-Delhi [email protected] Debanjan Mahata Bloomberg [email protected] Abstract The availability of large-scale online social data, coupled with computational methods, can help us answer fundamental questions relating to our social lives, particularly our health and well-being. The #MeToo trend has led to people talking about personal experiences of harassment more openly. This work attempts to aggregate such experiences of sexual abuse to facilitate a better understanding of social media constructs and to bring about social change. It has been found that disclosure of abuse has positive psychological impacts. Hence, we contend that such information can be leveraged to create better campaigns for social change by analyzing how users react to these stories and can be used to obtain a better insight into the consequences of sexual abuse. We use a three-part TwitterSpecific Social Media Language Model to segregate personal recollections of sexual harassment from Twitter posts. An extensive comparison with state-of-the-art generic and specific models along with a detailed error analysis explores the merit of our proposed model. 1 Introduction Global estimates indicate that about 1 in 3 women worldwide has experienced either physical and/or sexual intimate partner violence or non-partner sexual violence in their lifetime 1. The hashtag #MeToo has been prevalent on Twitter as a campaign centered around sharing stories of sexual harassment in the act of solidarity with other victims and spreading awareness of a widespread and endemic issue. With vast amounts of people sharing their recollections of sexual harassment on the Internet, it is important that we make scientific use * denotes equal contribution. 1https://www.who.int/news-room/factsheets/detail/violence-against-women of this data to increase awareness and enable realworld change. Manually sorting and comprehending the information shared in these stories is an arduous task. Hence our work can serve as the missing link between online activism and real change. Health information seeking and sharing practices online have been known in helping people cope with mental health problems (De Choudhury and De, 2014). Studies have shown that online forums and support groups provide a conducive environment allowing people to get connected with others who share similar stories, thus act as a path to obtaining help and advice around mental health problems (Eysenbach et al., 2004). Moreover, self-disclosure is therapeutic for mental health communities (Johnson and Ambrose, 2006). Our study proposes a Twitter-Specific Social Media Language Model for the aggregation of tweets containing personal stories of sexual harassment. Manikonda et al. (2018) have carried out a preliminary analysis of the user engagement, discussion topics, word connotations, and sentiment concerning the #metoo movement. Andalibi et al. (2016) have explored anonymity and support seeking during the #metoo movement. 
However, very few studies have attempted to separate texts containing discussions about sexual harassment from texts containing personal stories of sexual harassment experiences. Efforts have been made to aggregate domestic abuse stories from Reddit by Schrading et al. (2015). Karlekar and Bansal (2018a) have attempted to categorize personal stories into categories like ogling, commenting, groping. Our study aims to help this body of research grow by automating the process of collection of tweets containing recollections of sexual harassment. 2528 1.1 Clinical Perspective Prior research in psychology has demonstrated the importance of social support in combating depression (George et al., 1989). It is argued that social intimacy, social integration, nature of social networks as well as the individual perception of being supported by others are important and essential to quick recovery from mental health problems (Caplan and Turner, 2007). The internet is increasingly used for seeking and sharing health information online, and such activity is known to have connections to health-related behaviors ((Sillence et al., 2007); (Liu et al., 2013)). Online support groups are popular sources of information and support for many internet users (White and Dorman, 2001). These forums tend to be very different from similar offline groups; for instance, people are likely to discuss problems that they do not feel comfortable discussing in person (Johnson and Ambrose, 2006). Moreover, such online health communities are known to foster wellbeing, a sense of control, self confidence, social interactions, and improved feelings. In the context of mental health in particular, Moreno et al. (2011) demonstrated that status updates on Facebook could reveal symptoms of major depressive episodes, while Park et al. (2013) found differences in the perception of Twitter use between depressed and non-depressed users: the former found value in Twitter due to the ability to garner social awareness and engage in emotional interaction. We have presented, to the best of our knowledge, the first comprehensive dataset and methodology for detection of personal stories of sexual harassment on Twitter. We carry out extensive comparisons of our proposed Medium Specific Social Media Language Model with respect to baselines. Our work may provide a wealth of resources to clinicians, health practitioners, caregivers, and policy makers to identify communities at risk. 2 Related Work Natural language processing (NLP) techniques can be used to make inferences about peoples mental states from what they write on Facebook, Twitter, and other social media. These inferences can then be used to create online pathways to direct people to health information and assistance and also to generate personalized interventions. Regrettably, the computational methods used to collect, process, and utilize online writing data, as well as the evaluations of these techniques, are still dispersed in the literature. Wekerle et al. (2018) have shown that Twitter is being used for increasing research on sexual violence. Using social media could support at-risk youth, professionals, and academics given the many strengths of employing such a knowledge mobilization tool. Sawhney et al. (2018) have worked on the detection of posts containing suicidal ideation on Twitter. Social media use is free, easy to implement, available to difficult to access populations (e.g., victims of sexual violence), and can reduce the gap between research and practice. Bogen et al. 
(2018) discusses the social reactions to disclosures of sexual victimization on Twitter. This work suggests that online forums may offer a unique context for disclosing violence and receiving support. Khatua et al. (2018) have explored deep learning techniques to classify tweets of sexual violence, but have not specifically focused on building a robust system that can detect recollections of personal stories of abuse. Schrading et al. (2015) created the Reddit Domestic Abuse Dataset, to facilitate classification of domestic abuse stories using a combination of SVM and N-grams. Karlekar and Bansal (2018b) improved upon this by using CNN-LSTMs, due to the complementary strengths of both these architectures. Reddit allows lengthy submissions, unlike Twitter, and therefore the use of standard English is more common. This allows natural language processing tools trained on standard English to function better. Our method explores the merits of using a Twitter-Specific Language Model which can counter the shortcomings of using pre-trained word embeddings derived from other tasks, on a medium like Twitter where the language is informal, and the grammar is often ambiguous. A growing body of work has demonstrated that social media is an increasingly adopted platform allowing users to communicate around a variety of health concerns ((Paul and Dredze, 2011); (Andalibi et al., 2016)). Newman et al. (2011) interviewed people with significant health concerns who participated in both OHCs and Facebook. Oh et al. (2013) examined peoples use of Facebook for health purposes and showed that emotional support was a significant predictor of health selfefficacy. Manikonda et al. (2018) try to investi2529 gate social media posts discussing sexual abuse by analyzing factors such as linguistic themes, social engagement, and emotional attributes. Their work proves that Twitter is an effective source for human behavior analysis, based on several linguistic markers. Andalibi et al. (2016) attempt to characterize abuse related disclosures into different categories, based on different themes, like gender, support seeking nature etc. Our study aims to bridge the gap between gathering information and analyzing social media disclosures of sexual abuse. Our approach suggests that the language used on Twitter can be treated as a separate language construct, with its own rules and restrictions that need to be addressed to capture subtle nuances and understand the context better. 3 The Sexual Harassment Recollection (SHR) Dataset 3.1 Data Collection One of the foremost challenges with detecting personal recollections of sexual harassment is the lack of availability of a public dataset due to privacy and anonymity concerns borne out of social stigma associated with sexual harassment. Motivated by the need to create a fresh dataset, a corpus of words and phrases were developed using anonymized data from known Sexual Harassment forums. Between November 2016 and December 2018, these forums were scraped for the user posts and human annotators were asked to identify if these posts were related to sexual harassment. In addition to this, user posts (containing tags of sexual abuse) from the micro-blogging websites Reddit were collected and added to this collection. These were subsequently human annotated based on them containing personal recollections of sexual harassment or not. 
Then, the Term Frequency-Inverse Document Frequency (TF-IDF) method was applied to this set of manually annotated texts to identify terms which frequently appear in the texts belonging to the Recollection class and less frequently in the Non-Recollection class. These terms play a role in differentiating between the two classes. Finally, manual annotators were asked to remove from this list any terms which were not related to sexual harassment, as well as duplicate terms. This gave a final lexicon of 70 terms, consisting of, but not limited to, the phrases/words of Table 1 (a small code sketch of this term-selection step is given below, after the annotation categories).

Table 1: Words/Phrases linked with Sexual Harassment
was assaulted, molested me, raped me, touched me, groped, I was stalked, forced me, #WhyIStayed, #WhenIwas, #NotOkay, abusive relationship, drugged, underage, inappropriate, followed, boyfriend, workplace, #sexualharassment, #notallmen, #mentoo, #timesup, #womensreality, #EverydaySexism

The public Streaming API offered by Twitter allows programmatic collection of tweets as they occur, filtered by specific criteria. Using this API, anonymized data was collected from Twitter. The tweets retrieved from Twitter using the API contain extraneous information: a tweet can be associated with URLs, user mentions, media files (image, audio, and video), a timestamp, and a retweet count. For the tasks in this paper, the text of each tweet was extracted, while the rest of the information about the tweet was discarded. Although the tweets were collected from the stream filtered by the sexual harassment lexicon developed earlier, the exact nature of each tweet was unknown: tweets about sexual harassment could also relate to other things, e.g., a sexual harassment awareness campaign and prevention, a news report, or sarcasm. This made manual annotation of the dataset imperative for better accuracy.

3.2 Data Annotation

The final dataset, consisting of 5,119 text sentences from different tweets, was then manually annotated by two humans: one a student of gender studies, the other a student of clinical psychology. An outside annotator (a teacher of gender studies and a non-activist feminist) helped the annotators with conflicts and reviewed our annotations to mitigate bias and confusion. Tweets annotated as Recollection are labeled as 1, and the rest of the tweets are labeled 0. To reliably identify disclosure, we clearly define a tweet to be labeled as Recollection if it explicitly mentions a personal recollection of sexual harassment, e.g., I was molested by ex-boyfriend. Sexual harassment in our case entails a broader definition of this term, which includes sexual abuse, sexual assault, rape, and sexual harassment. The remaining tweets not marked as Recollection belonged to one of the following categories:

• Awareness-related tweets; e.g., Do you know what the consequences of domestic violence include? Learn more here <url> #feminism #meToo
• Flippant references; e.g., Dude, I can't play Fortnite! I got raped there, haha meToo
• News reports and incidents; e.g., In an exclusive interview with BBC Asian Network, Bollywood superstar @iamsrk speaks about the #meToo movement, film censorship and #Brexit.
• Tweets describing others' experiences; e.g., My best friend was sexually assaulted. #meToo #assault
• Tweets using #meToo in a different context; e.g., Yumm! I'm starving for spring rolls too #meToo #chinese
• Other remaining tweets; e.g., So the #meToo movement doesn't apply to democrats? Oh ok, got it. and Exploiting the #meToo movement for political gain? Not cool.
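The TF-IDF-based term selection described in Section 3.1 can be sketched in a few lines of Python. The vectorizer settings, the helper name candidate_terms, and the toy example texts below are illustrative assumptions, not the exact pipeline used to build the 70-term lexicon.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def candidate_terms(recollection_texts, other_texts, top_k=100):
    """Rank unigrams/bigrams whose TF-IDF weight is high in the Recollection
    class and low in the Non-Recollection class; the resulting list is then
    pruned manually, as described in Section 3.1."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vectorizer.fit_transform(list(recollection_texts) + list(other_texts))
    n_pos = len(recollection_texts)
    # Mean TF-IDF weight of every term within each class.
    pos_scores = np.asarray(tfidf[:n_pos].mean(axis=0)).ravel()
    neg_scores = np.asarray(tfidf[n_pos:].mean(axis=0)).ravel()
    contrast = pos_scores - neg_scores   # higher = more Recollection-specific
    terms = np.array(vectorizer.get_feature_names_out())
    return list(terms[np.argsort(-contrast)][:top_k])

# Toy usage with made-up forum posts:
lexicon = candidate_terms(
    ["i was molested by a family member", "he groped me on the bus #NotOkay"],
    ["read this news report on harassment", "harassment awareness campaign next week"],
    top_k=10)
print(lexicon)
```

In practice, the ranked list produced this way would still contain off-topic or duplicate entries, which is why the manual pruning step above is part of the lexicon construction.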
Finally, after an agreement between the annotators (using a majority decision to label the mixed cases), 1,126 tweets in the dataset (22% of the dataset) were annotated as Recollection, with an average Cohen's kappa inter-annotator agreement of κ = 0.83, while the rest fell into the category of Discussion. Our dataset will be made publicly available, following the guidelines mentioned in Section 7, to facilitate further research and analysis on this very pertinent issue (github.com/arijit1410/ACL2019-YouToo).

Table 2: Example tweets from the annotated dataset (label 1 = Recollection, label 0 = otherwise)
(1) #WhenIWas 15 I was molested by my best friend
(1) I was sexually assaulted by my step brother in 2009.
(1) At 8 years old, an adult family member sexually assaulted me.
(1) I was 7 the first time I was sexually assaulted.
(1) I was sexually assaulted by at least 3 different babysitters by the time I was 6 years old.
(0) #Me too campaign stop sexual harassment and sexual assault.
(0) Trying to silence sexual assault victims is another one. The list goes on and on
(0) Then call for people that cover up sexual assault like Jim Jordan to resign???
(0) sexual assault on public transport is real
(0) agreed! metoo is not just exclusively for women!

3.3 Preprocessing

The following preprocessing steps were taken as a part of noise reduction:

• Extra white spaces, newlines, and special characters were removed from the sentences.
• The stopwords corpus was taken from NLTK and used to eliminate words which provide little to no information about individual tweets (Loper and Bird, 2002).
• URLs, screen names (usernames), hashtags (#), digits (0-9), and all non-English words were removed from the dataset (non-English words were identified with the Enchant spell-checking library, https://abiword.github.io/enchant/).

4 The Social Media Language Model (SMLM)

Our work considers deep learning techniques for the detection of social media disclosures of sexual harassment. The majority of methods used to study NLP problems employ shallow machine learning models and time-consuming, hand-crafted features, and suffer from dimensionality problems, since linguistic information is usually represented with sparse, high-dimensional features (Khatua et al., 2018). Bag-of-words approaches tend to have high recall but lead to high rates of false positives, because lexical detection methods simply flag every message containing particular terms. CNNs have also been able to generate state-of-the-art results in text classification because of their ability to extract features from word embeddings (Kim, 2014). Recent approaches that concatenate embeddings derived from other tasks with the input at different layers (Maas et al., 2011) still train from scratch and treat pre-trained embeddings as fixed parameters, limiting their usefulness. A language model that possesses universal properties could be useful in cases where there is a lack of annotated datasets or language resources, which is prevalent in NLP research. We propose a three-part classification method based on the Universal Language Model Fine-tuning (ULMFiT) architecture introduced by Howard and Ruder (2018), which enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models.

We use the 3-layer AWD-LSTM architecture proposed by Merity et al. (2017), with the same hyperparameters and no additions other than tuned dropout hyperparameters. Dropout has been successful in feed-forward and convolutional neural networks, but applying dropout in the same way to an RNN's hidden state is ineffective, as it disrupts the RNN's ability to retain long-term dependencies and may cause overfitting. Our proposed method makes use of DropConnect (Merity et al., 2017), in which, instead of activations, a randomly selected subset of weights within the network is set to zero. Each unit thus receives input from a random subset of units in the previous layer. By performing dropout on the hidden-to-hidden weight matrices, overfitting can be prevented on the recurrent connections of the LSTM (a toy code sketch of this weight-level dropout is given later, alongside the baseline descriptions in Section 5.1).

[Figure 1: The Social Media Language Model Overview]

4.1 Classification

The Language Model (LM) is trained from a large corpus of unlabeled data; in this case, a pre-trained Wikipedia language model was used. This language model is then used as the basis to train a Twitter Model (TM) from unlabeled data that matches the desired medium of the task (e.g., forum posts, newspaper articles, or tweets). In our study, the weights of the pre-trained language model are slowly retrained on a subset of the Twitter Sentiment140 dataset (https://www.kaggle.com/kazanova/sentiment140). This augmented vocabulary improves the model's domain understanding of Tweet syntax and semantics. Finally, a binary classifier is trained on top of the Twitter Model from a labeled dataset. This approach facilitates the reuse of pre-trained models for the lower layers.

5 Experiment Setup

5.1 Baselines

In order to make a fair comparison between all the models mentioned above, the experiments are conducted with respect to certain baselines. Schrading et al. (2015) proposed the Domestic Abuse Disclosure (DAD) Model, using the 1-, 2-, and 3-grams in the text, the predicates, and the semantic role labels as features, including TF-IDF and Bag of Words. Andalibi et al. (2016) used a Self-Disclosure Analysis (SDA) logistic regression model with added features such as TF-IDF and character n-grams to characterize abuse-related disclosures by analyzing word occurrences in the texts. In the experiments, we also evaluate and compare our model with several widely used baseline methods, including: RNN (Liu et al., 2016), LSTM/Bi-LSTM (Merity et al., 2017), CNN (Kim, 2014), Character-Level Convolutional Network (CL-CNN) (Zhang et al., 2015), fastText (Joulin et al., 2017), Hierarchical Attention Networks (HATT) (Yang et al., 2016), and an Attention-Based CNN-LSTM (A-CNN-LSTM) (Yuan et al., 2018). The Transformer-based language model (Vaswani et al., 2017; Ritter et al., 2010) was used to compare the performance of language-model-based architectures.

Table 3: Performance Comparisons on the SHR Dataset
Architecture / Accuracy / Precision / Recall / F1
DAD Model / 0.91 / 0.90 / 0.91 / 0.90
SDA Model / 0.90 / 0.87 / 0.90 / 0.88
Word-CNN / 0.92 / 0.68 / 0.95 / 0.79
LSTM / 0.92 / 0.70 / 0.98 / 0.81
RNN / 0.93 / 0.86 / 0.95 / 0.90
CL-CNN / 0.92 / 0.70 / 0.91 / 0.79
fastText-BOT / 0.87 / 0.70 / 0.80 / 0.74
HATT / 0.93 / 0.93 / 0.95 / 0.93
Bi-LSTM / 0.93 / 0.86 / 0.98 / 0.91
RCNN / 0.90 / 0.86 / 0.90 / 0.87
CNN-LSTM / 0.94 / 0.93 / 0.94 / 0.94
Attentional Bi-LSTM / 0.93 / 0.90 / 0.98 / 0.93
A-CNN-LSTM / 0.94 / 0.92 / 0.98 / 0.94
openAI-Transformer / 0.95 / 0.94 / 0.96 / 0.94
SMLM / 0.96 / 0.95 / 0.97 / 0.96

Table 4: Variation in performance with the inclusion of augmented vocabulary on Twitter datasets
Task (Twitter) / Architecture / Accuracy / Precision / Recall / F1
Ours / SMLM + No Augmented Vocab / 0.92 / 0.86 / 0.95 / 0.90
Ours / SMLM + Augmented Vocab / 0.96 / 0.95 / 0.97 / 0.96
Stance Detection / SMLM + No Augmented Vocab / 0.52 / 0.52 / 0.54 / 0.51
Stance Detection / SMLM + Augmented Vocab / 0.67 / 0.66 / 0.63 / 0.64
Hate Speech / SMLM + No Augmented Vocab / 0.90 / 0.90 / 0.91 / 0.90
Hate Speech / SMLM + Augmented Vocab / 0.93 / 0.93 / 0.94 / 0.93
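Before the baseline training details, here is the toy sketch referred to in Section 4: a hypothetical, hand-written LSTM step with DropConnect applied to its hidden-to-hidden weights. It only illustrates the idea of dropping recurrent weights rather than activations; the actual system relies on the AWD-LSTM implementation of Merity et al. (2017), and the sizes below simply mirror the embedding and hidden dimensions reported in Section 5.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    """Single LSTM step with DropConnect on the recurrent weights.

    Illustrative only: during training, a random subset of the
    hidden-to-hidden *weights* (not the hidden activations) is zeroed,
    so the memory carried by the hidden state is not directly disrupted.
    """
    def __init__(self, input_size, hidden_size, weight_dropout=0.5):
        super().__init__()
        self.w_ih = nn.Linear(input_size, 4 * hidden_size)
        self.w_hh = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        self.weight_dropout = weight_dropout

    def forward(self, x, state):
        h, c = state
        # DropConnect: sample a dropout mask over the recurrent weight matrix itself.
        w_hh = F.dropout(self.w_hh.weight, p=self.weight_dropout,
                         training=self.training)
        gates = self.w_ih(x) + F.linear(h, w_hh)
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Toy usage mirroring the sizes in Section 5.2 (embedding 400, hidden 1150).
cell = WeightDropLSTMCell(input_size=400, hidden_size=1150)
x = torch.randn(32, 400)          # one time step for a batch of 32 tokens
h0 = torch.zeros(32, 1150)
c0 = torch.zeros(32, 1150)
out, (h1, c1) = cell(x, (h0, c0))
```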
For the RNN and LSTM baselines, pre-trained GloVe word embeddings trained on 2 billion tweets are used as features for classification. The ReLU activation function (Nair and Hinton, 2010) was used for the CNN layers in the CNN-LSTM models. A dropout probability of 0.2 was used. The batch size was chosen to be 64, and the total number of epochs was 25. The Adam optimizer (Kingma and Ba, 2014) was used for all the models, along with a learning rate of 0.001. A small subset (10%) of the dataset is held back for testing on unseen data.

5.2 SMLM Architectures and Parameters

Our method uses the weight-dropped AWD-LSTM architecture (Merity et al., 2017). The embedding size is 400, the number of hidden activations per layer is 1150, and the number of layers used is 3. Two linear blocks with batch normalization and dropout have been added to the model, with rectified linear unit activations for the intermediate layer and a softmax activation at the last layer. The models use different configurations for back-propagation through time (BPTT), learning rate (LR), weight decay (WD), dropouts, cyclical learning rates (CLR) (Smith, 2017), and slanted triangular learning rates (STLR) (Howard and Ruder, 2018). Additionally, gradient clipping (Pascanu et al., 2013) has been applied to some of the models. The RNN hidden-to-hidden matrix uses a weight dropout for all the models. We train the models for 15 epochs. For the CLR, the four parameters are the maximum-to-minimum learning rate divisor, the cooldown percentage, the maximum momentum, and the minimum momentum, in that order. For the STLR, the parameters are the maximum-to-minimum learning rate divisor and cut_frac, where cut_frac is the fraction of iterations during which we increase the LR. The dropouts used by Howard and Ruder (2018) are: input layer 0.25, general layer 0.1, LSTM internal 0.2, embedding layer 0.02, between LSTM layers 0.15.

• Language Model (LM): batch size 32, BPTT 70, gradient clipping (0.4, 0.12), STLR ratio 32 with cut_frac 0.1, CLR (10, 10, 0.95, 0.85), weight dropout 0.5. The Adam optimizer has been used.
• Twitter Model (TM): batch size 32, BPTT 70, weight decay 0.0000001. The model is gradually unfrozen (Howard and Ruder, 2018) by unfreezing the last layer first and then unfreezing all subsequent layers. An STLR ratio of 32 and a cut_frac of 0.5 were used after the last layer was unfrozen, and an STLR ratio of 20 and a cut_frac of 0.1 were used when all layers were unfrozen.
• Recollection Model (RM): learning rate 0.3, batch size 52, BPTT 70, weight decay 0.0000001, cyclical learning rates (10, 10, 0.98, 0.85). The model is gradually unfrozen layer by layer, with the same hyper-parameters applied to each layer. The Howard and Ruder dropouts are applied with a multiplier of 1.8, and no gradient clipping is applied. The Adam optimizer is used.

5.3 Further Exploration

The Twitter Model (TM) in our proposed method enables fine-tuning of the language model on a large corpus of domain-specific data. To validate that the model is generic, and to show that the addition of an augmented vocabulary boosts the performance of classifiers across other tasks with relatively small training datasets as well, we compare the performance of the SMLM on several publicly available small datasets, with and without the extended vocabulary. The Political Stance Detection Dataset (SemEval-2016 Task 6) uses a small dataset of 4,163 tweets for classification.
The labeled data provided consists of a target topic, a Tweet that pertains to it, and the stance of the Tweet towards the target (the data is available at http://alt.qcri.org/semeval2016/task6/). The data is already split into a training set (containing 2,914 Tweets) and a test set (containing 1,249 Tweets). We also test the SMLM model on the Twitter hate-speech dataset created by Davidson et al. (2017) (https://data.world/crowdflower/hate-speech-identification). The tweets are labelled as hate speech, offensive, or neutral. We augment the language model with the same subset of 100,000 tweets that we used for the SMLM model.

6 Results and Analysis

6.1 Performance

Table 3 describes the performance of the baseline classifiers as well as the deep learning models based on four evaluation metrics. The Social Media Language Model outperforms all baseline models, including RNNs, LSTMs, CNNs, and the linear DAD and SDA models. The A-CNN-LSTM and the Hierarchical Attention Model have high recall due to their ability to better capture long-term dependencies; the attention mechanism allows the model to retain important hidden information when the sentences are quite long. CL-CNNs may generate unusual words, as they suffer from a higher perplexity due to the character-by-character nature of their predictions; also, longer training time can lead to vanishing gradients. The fastText model is able to generate embeddings more quickly but performs similarly to the CL-CNN model. The AWD-LSTM architecture used in the Social Media Language Model is able to avoid catastrophic forgetting. The main benefit, however, of the ULMFiT-based Social Media Language Model is that it can perform classifier re-training with a very limited amount of data. The openAI-Transformer model comes a close second in terms of performance.

6.2 Generic Nature of the SMLM Model

Results show that augmenting the training data with additional domain-specific data (i.e., Tweets) helps to obtain better F1-scores for the segregation of tweets containing instances of personal experiences of sexual harassment. Table 4 shows that the addition of this augmented vocabulary can be extended to other Twitter tasks with limited training data as well, implying that our proposed model has the potential to be generic across other medium-specific tasks. We make the following observations:

• Our fine-tuned language model can generalize to the unstructured and messy language syntax of Tweets.
• The SMLM model can achieve an improved F1 score with minimal task-specific customization for each model and with limited computing resources.

6.3 Error Analysis

An analysis was carried out to show which texts lead to erroneous predictions, together with a possible explanation of why that might have been the case (Table 5). L is the correct label, and M is the label predicted by the SMLM model.

• T1: This text has a flippant tone. However, the system cannot pick up this nuance because it does not understand the casual nature of the discussion and the misplaced use of the term "rape".
• T2: Here, someone is referring to another person's recollection. However, this text contains all the linguistic markers associated with assault disclosure.
• T3: Here, readers can pick up the context of this being a probable recollection of sexual harassment by a teacher when the author was 12. The system cannot pick up the context the way a human can, based on previous trends in other tweets.
• T4: The system cannot pick up the meaning of the term "metoo survivor".
A human can associate the term "survivor" with "metoo" if they have context from other tweets in which people talk about having survived sexual abuse and harassment.
• T5: The current training dataset lacks a broad enough range of phrases that can imply sexual harassment.
• T6: The sentence, although in the first person, refers to someone else's experience.
• T7: In this case, the user assumes that a majority of readers will be able to gather context from the amount of information provided. However, the system is unable to pick up this nuance because of a lack of information about current events. Specifically, the system does not have prior information on who Dr. Ford is.

Table 5: Error analysis for the task of analyzing disclosures of sexual harassment on social media (L = correct label, M = label predicted by the SMLM model)
T1 (L=0, M=1): "Dude, I can't play Fortnite! I got raped there, haha meToo"
T2 (L=0, M=1): "I was followed and harassed by two guys on my way back home last night." This is what my friend had to say after spending one day in Baja.
T3 (L=1, M=0): "He was my teacher and I was 12. #metoo"
T4 (L=1, M=0): "I too am a metoo survivor"
T5 (L=1, M=0): "I was walking home and I saw in broad daylight a man walking towards me furiously rubbing his privates looking at me".
T6 (L=0, M=1): "senatorcollins i beg you for my 12 year old daughter who was sexually assaulted by her teacher please do not vote yes on kavanaugh".
T7 (L=1, M=0): "I believe Dr Ford because the same thing happened to me"

7 Ethical Considerations and Limitations

Research with sexual assault victim-survivors can present heightened ethical challenges. This means that research on this topic must be handled with particular skill, care, and respect. We address the following limitations:

• Confidentiality: Individual consent from users was not sought, as the data was publicly available, and attempts to contact the author for research participation could be deemed coercive and may change user behavior. For instance, some victims may be deterred from coming forward if they knew they were being "tracked" by algorithms.
• Justice: The exhaustive nature of training data introduces bias in terms of how representative the dataset, and hence the trained model, is of an underlying community. While it is not possible to capture all demographics, we try to maximize our coverage by building our dataset in two phases, first developing a lexicon from various micro-blogging sites. Any potential benefits of a project should be balanced carefully against the potential to cause harm. If bias is present, the benefits of the research are not shared across the community.
• Potential Misrepresentation: Although our work attempts to analyze aspects of users' nuanced and complex experiences, we acknowledge the limitations and potential misrepresentations that can occur when researchers analyze social media data, particularly data from a vulnerable population or group to which the researchers do not explicitly belong. Note that the goal of this research is by no means to claim that our coding is accurate; we only attempt to study whether it is possible to categorize tweets in this way. Particular care was taken to ensure that all members of the research team have been extensively trained in undertaking research sensitively and are aware of relevant ethical issues.

8 Conclusion and Future Work

In this work, we proposed a Social Media Language Model, a three-part ULMFiT architecture.
On a manually annotated real-world dataset, created in two steps to capture a large demographic, our systems could often achieve significant performance improvements over systems that rely on handcrafted and textual features and generic deep learning based systems. An extensive comparison shows the merit of using Medium-Specific Language Models based on an AWD-LSTM architecture, along with an augmented vocabulary which is capable of representing deep linguistic subtleties in text that pose challenges to the complex task of detecting sexual harassment disclosure. We also hope this study enables further research in terms of how people seek support online on sexual harassment and mental health-related problems. Our future agenda includes exploring the applicability of our analysis and system for identifying patterns and potential prevention. We also plan to use this model to solve other downstream medium-specific tasks pertaining to mental health and welfare. References Nazanin Andalibi, Oliver Haimson, Munmun De Choudhury, and Andrea Forte. 2016. Understanding social media disclosures of sexual abuse through the lenses of support seeking and anonymity. pages 3906–3918. Katherine Bogen, Kaitlyn Bleiweiss, and Lindsay M. Orchowski. 2018. Sexual violence is notokay: Social reactions to disclosures of sexual victimization on twitter. Psychology of Violence. Scott E Caplan and Jacob S Turner. 2007. Bringing theory to research on computer-mediated comforting communication. Computers in human behavior, 23(2):985–998. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Eleventh International AAAI Conference on Web and Social Media. Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on reddit: Self-disclosure, social support, and anonymity. In Eighth International AAAI Conference on Weblogs and Social Media. Gunther Eysenbach, John Powell, Marina Englesakis, Carlos Rizo, and Anita Stern. 2004. Health related virtual communities and electronic support groups: systematic review of the effects of online peer to peer interactions. Bmj, 328(7449):1166. Linda K George, Dan G Blazer, Dana C Hughes, and Nancy Fowler. 1989. Social support and the outcome of major depression. The British Journal of Psychiatry, 154(4):478–485. Jeremy Howard and Sebastian Ruder. 2018. Finetuned language models for text classification. arXiv preprint arXiv:1801.06146. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. arXiv e-prints, page arXiv:1801.06146. Grace J Johnson and Paul J Ambrose. 2006. Neotribes: The power and potential of online communities in health care. Communications of the ACM, 49(1):107–113. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. 2536 Sweta Karlekar and Mohit Bansal. 2018a. Safecity: Understanding diverse forms of sexual harassment personal stories. arXiv preprint arXiv:1809.04739. Sweta Karlekar and Mohit Bansal. 2018b. Unc chapel hill 1swetakar, mbansall@ cs. unc. edu. Aparup Khatua, Erik Cambria, and Apalak Khatua. 2018. Sounds of silence breakers: Exploring sexual violence on twitter. pages 397–400. Yoon Kim. 2014. Convolutional neural networks for sentence classification. 
arXiv preprint arXiv:1408.5882. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Leslie S Liu, Jina Huh, Tina Neogi, Kori Inkpen, and Wanda Pratt. 2013. Health vlogger-viewer interaction in chronic illness management. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 49–58. ACM. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101. Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142–150. Association for Computational Linguistics. Lydia Manikonda, Ghazaleh Beigi, Subbarao Kambhampati, and Huan Liu. 2018. metoo Through the Lens of Social Media, pages 104–110. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and Optimizing LSTM Language Models. arXiv e-prints, page arXiv:1708.02182. Megan A Moreno, Lauren A Jelenchick, Katie G Egan, Elizabeth Cox, Henry Young, Kerry E Gannon, and Tara Becker. 2011. Feeling bad on facebook: depression disclosures by college students on a social networking site. Depression and anxiety, 28(6):447–455. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, pages 807–814, USA. Omnipress. Mark W Newman, Debra Lauterbach, Sean A Munson, Paul Resnick, and Margaret E Morris. 2011. It’s not that i don’t have problems, i’m just not putting them on facebook: challenges and opportunities in using online social networks for health. In Proceedings of the ACM 2011 conference on Computer supported cooperative work, pages 341–350. ACM. Hyun Jung Oh, Carolyn Lauckner, Jan Boehmer, Ryan Fewins-Bliss, and Kang Li. 2013. Facebooking for health: An examination into the solicitation and effects of health-related social support on social networking sites. Computers in human behavior, 29(5):2072–2080. Minsu Park, David W McDonald, and Meeyoung Cha. 2013. Perception differences between the depressed and non-depressed users in twitter. In Seventh International AAAI Conference on Weblogs and Social Media. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318. Michael J Paul and Mark Dredze. 2011. You are what you tweet: Analyzing twitter for public health. In Fifth International AAAI Conference on Weblogs and Social Media. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180. Association for Computational Linguistics. Ramit Sawhney, Prachi Manchanda, Raj Singh, and Swati Aggarwal. 2018. A computational approach to feature extraction for identification of suicidal ideation in tweets. 
In Proceedings of ACL 2018, Student Research Workshop, pages 91–98. Nicolas Schrading, Cecilia Alm, Ray Ptucha, and Christopher Homan. 2015. An analysis of domestic abuse discourse on reddit. pages 2577–2583. Elizabeth Sillence, Pam Briggs, Peter Richard Harris, and Lesley Fishwick. 2007. How do patients evaluate and make use of online health information? Social science & medicine, 64(9):1853–1862. Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pages 464–472. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 2537 Christine Wekerle, Negar Vakili, Sherry Stewart, and Tara Black. 2018. The utility of twitter as a tool for increasing reach of research on sexual violence. Child abuse neglect, 85. Marsha White and Steve M Dorman. 2001. Receiving social support online: implications for health education. Health education research, 16(6):693–707. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Hang Yuan, Jin Wang, and Xuejie Zhang. 2018. Ynuhpcc at semeval-2018 task 11: Using an attentionbased cnn-lstm for machine comprehension using commonsense knowledge. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1058–1062. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
Multi-task Pairwise Neural Ranking for Hashtag Segmentation
Mounica Maddela1, Wei Xu1, Daniel Preoţiuc-Pietro2
1 Department of Computer Science and Engineering, The Ohio State University
2 Bloomberg LP
{maddela.4, xu.1265}@osu.edu [email protected]

Abstract

Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer, as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate a 24.6% error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6% increase in average recall on the SemEval 2017 sentiment analysis dataset. (Our toolkit along with the code and data are publicly available at https://github.com/mounicam/hashtag_master.)

1 Introduction

A hashtag is a keyphrase represented as a sequence of alphanumeric characters plus underscore, preceded by the # symbol. Hashtags play a central role in online communication by providing a tool to categorize the millions of posts generated daily on Twitter, Instagram, etc. They are useful in search, tracking content about a certain topic (Berardi et al., 2011; Ozdikis et al., 2012), or discovering emerging trends (Sampson et al., 2016). Hashtags often carry very important information, such as emotion (Abdul-Mageed and Ungar, 2017), sentiment (Mohammad et al., 2013), sarcasm (Bamman and Smith, 2015), and named entities (Finin et al., 2010; Ritter et al., 2011).

Table 1: Examples of single- (47.1%) and multi-word hashtags (52.9%) and their categorizations based on a sample of our data.
Type / Single-token / Multi-token
Named-entity (33.0%) / #lionhead / #toyotaprius
Events (14.8%) / #oscars / #ipv6summit
Standard (43.6%) / #snowfall / #epicfall
Non-standard (11.2%) / #sayin / #iloveu4eva

However, inferring the semantics of hashtags is nontrivial, since many hashtags contain multiple tokens joined together, which frequently leads to multiple potential interpretations (e.g., lion head vs. lionhead). Table 1 shows several examples of single- and multi-token hashtags. While most hashtags represent a mix of standard tokens, named entities and event names are prevalent and pose challenges to both human and automatic comprehension, as these are more likely to be rare tokens. Hashtags also tend to be shorter to allow fast typing, to attract attention, or to satisfy length limitations imposed by some social media platforms. Thus, they tend to contain a large number of abbreviations or non-standard spelling variations (e.g., #iloveu4eva) (Han and Baldwin, 2011; Eisenstein, 2013), which hinders their understanding.
The goal of our study is to build efficient methods for automatically splitting a hashtag into a meaningful word sequence. Our contributions are: • A larger and better curated dataset for this task; • Framing the problem as pairwise ranking using novel neural approaches, in contrast to previous work which ignored the relative order of candidate segmentations; • A multi-task learning method that uses different sets of features to handle different types of 2539 hashtags; • Experiments demonstrating that hashtag segmentation improves sentiment analysis on a benchmark dataset. Our new dataset includes segmentation for 12,594 unique hashtags and their associated tweets annotated in a multi-step process for higher quality than the previous dataset of 1,108 hashtags (Bansal et al., 2015). We frame the segmentation task as a pairwise ranking problem, given a set of candidate segmentations. We build several neural architectures using this problem formulation which use corpus-based, linguistic and thesaurus based features. We further propose a multi-task learning approach which jointly learns segment ranking and single- vs. multi-token hashtag classification. The latter leads to an error reduction of 24.6% over the current state-of-the-art. Finally, we demonstrate the utility of our method by using hashtag segmentation in the downstream task of sentiment analysis. Feeding the automatically segmented hashtags to a state-of-the-art sentiment analysis method on the SemEval 2017 benchmark dataset results in a 2.6% increase in the official metric for the task. 2 Background and Preliminaries Current approaches for hashtag segmentation can be broadly divided into three categories: (a) gazeteer and rule based (Maynard and Greenwood, 2014; Declerck and Lendvai, 2015; Billal et al., 2016), (b) word boundary detection (C¸ elebi and ¨Ozg¨ur, 2017, 2016), and (c) ranking with language model and other features (Wang et al., 2011; Bansal et al., 2015; Berardi et al., 2011; Reuter et al., 2016; Simeon et al., 2016). Hashtag segmentation approaches draw upon work on compound splitting for languages such as German or Finnish (Koehn and Knight, 2003) and word segmentation (Peng and Schuurmans, 2001) for languages with no spaces between words such as Chinese (Sproat and Shih, 1990; Xue and Shen, 2003). Similar to our work, Bansal et al. (2015) extract an initial set of candidate segmentations using a sliding window, then rerank them using a linear regression model trained on lexical, bigram and other corpus-based features. The current state-ofthe-art approach (C¸ elebi and ¨Ozg¨ur, 2017, 2016) uses maximum entropy and CRF models with a combination of language model and hand-crafted features to predict if each character in the hashtag is the beginning of a new word. Generating Candidate Segmentations. Microsoft Word Breaker (Wang et al., 2011) is, among the existing methods, a strong baseline for hashtag segmentation, as reported in C¸ elebi and ¨Ozg¨ur (2017) and Bansal et al. (2015). It employs a beam search algorithm to extract k best segmentations as ranked by the n-gram language model probability: ScoreLM(s) = n X i=1 log P(wi|wi−N+1 . . . wi−1) where [w1, w2 . . . wn] is the word sequence of segmentation s and N is the window size. More sophisticated ranking strategies, such as Binomial and word length distribution based ranking, did not lead to a further improvement in performance (Wang et al., 2011). The original Word Breaker was designed for segmenting URLs using language models trained on web data. 
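As a rough illustration of this candidate-generation step, the sketch below runs a beam search over split points and scores each candidate word with a placeholder log-probability function. The toy unigram counts and the function names are assumptions made for readability, not the Word Breaker's actual language model or API.

```python
import math
from heapq import nlargest

def log_prob(word, context):
    """Placeholder for a smoothed n-gram language model score (e.g., from
    SRILM or KenLM). Toy unigram counts are assumed here for illustration."""
    toy_counts = {"songs": 1e5, "on": 1e7, "itunes": 5e4, "i": 1e7, "tunes": 2e4}
    return math.log(toy_counts.get(word, 0.5) / 1e8)

def beam_segment(hashtag, k=10, max_word_len=20):
    """Return up to k highest-scoring segmentations of `hashtag`."""
    # beams[i] holds the best (score, words) hypotheses covering hashtag[:i].
    beams = {0: [(0.0, [])]}
    for i in range(1, len(hashtag) + 1):
        hypotheses = []
        for j in range(max(0, i - max_word_len), i):
            word = hashtag[j:i]
            for score, words in beams[j]:
                hypotheses.append((score + log_prob(word, words), words + [word]))
        beams[i] = nlargest(k, hypotheses, key=lambda h: h[0])
    return [" ".join(words) for _, words in beams[len(hashtag)]]

print(beam_segment("songsonitunes", k=5))
```

The k-best list produced by a search of this kind is exactly what the reranking methods described next take as their input.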
In this paper, we reimplemented2 and tailored this approach to segmenting hashtags by using a language model specifically trained on Twitter data (implementation details in §3.6). The performance of this method itself is competitive with state-of-the-art methods (evaluation results in §5.3). Our proposed pairwise ranking method will effectively take the top k segmentations generated by this baseline as candidates for reranking. However, in prior work, the ranking scores of each segmentation were calculated independently, ignoring the relative order among the top k candidate segmentations. To address this limitation, we utilize a pairwise ranking strategy for the first time for this task and propose neural architectures to model this. 3 Multi-task Pairwise Neural Ranking We propose a multi-task pairwise neural ranking approach to better incorporate and distinguish the relative order between the candidate segmentations of a given hashtag. Our model adapts to address single- and multi-token hashtags differently via a multi-task learning strategy without requiring additional annotations. In this section, we describe the task setup and three variants of pairwise neural ranking models (Figure 1). 2To the best of our knowledge, Microsoft discontinued its Word Breaker and Web Ngram API services in early 2018. 2540 hashtag (h) #songsonghaddafisitunes segmentation (s∗) songs on ghaddafis itunes (i.e. songs on Ghaddafi’s iTunes) candidate segmentations (s ∈S) songs on ghaddafis itunes songs on ghaddafisi tunes songs on ghaddaf is itunes song song haddafis i tunes songsong haddafisitunes (and . . . ) Table 2: Example hashtag along with its gold and possible candidate segmentations. 3.1 Segmentation as Pairwise Ranking The goal of hashtag segmentation is to divide a given hashtag h into a sequence of meaningful words s∗= [w1, w2, . . . , wn]. For a hashtag of r characters, there are a total of 2r−1 possible segmentations but only one, or occasionally two, of them (s∗) are considered correct (Table 2). We transform this task into a pairwise ranking problem: given k candidate segmentations {s1, s2, . . . , sk}, we rank them by comparing each with the rest in a pairwise manner. More specifically, we train a model to predict a real number g(sa, sb) for any two candidate segmentations sa and sb of hashtag h, which indicates sa is a better segmentation than sb if positive, and vice versa. To quantify the quality of a segmentation in training, we define a gold scoring function g∗based on the similarities with the ground-truth segmentation s∗: g∗(sa, sb) = sim(sa, s∗) −sim(sb, s∗). We use the Levenshtein distance (minimum number of single-character edits) in this paper, although it is possible to use other similarity measurements as alternatives. We use the top k segmentations generated by Microsoft Word Breaker (§2) as initial candidates. 3.2 Pairwise Neural Ranking Model For an input candidate segmentation pair ⟨sa, sb⟩, we concatenate their feature vectors sa and sb, and feed them into a feedforward network which emits a comparison score g(sa, sb). The feature vector sa or sb consists of language model probabilities using Good-Turing (Good, 1953) and modified Kneser-Ney smoothing (Kneser and Ney, 1995; Chen and Goodman, 1999), lexical and linguistic features (more details in §3.5). For training, we use all the possible pairs ⟨sa, sb⟩of the k candidates as the input and their gold scores g∗(sa, sb) as the target. 
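Spelled out in code, building the training pairs and their gold scores could look like the following sketch. The mapping from Levenshtein distance to sim(·) (a simple negation here) and the helper names are assumptions; the paper specifies only that edit distance is the underlying similarity measure and that all ordered pairs of the k candidates are used.

```python
from itertools import permutations

def levenshtein(a, b):
    """Minimum number of single-character edits between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def sim(candidate, gold):
    # One simple choice: negated edit distance (higher = more similar).
    return -levenshtein(candidate, gold)

def gold_pairwise_scores(candidates, gold):
    """Yield ((s_a, s_b), g*(s_a, s_b)) for every ordered pair of candidates."""
    for s_a, s_b in permutations(candidates, 2):
        yield (s_a, s_b), sim(s_a, gold) - sim(s_b, gold)

candidates = ["songs on ghaddafis itunes",
              "songs on ghaddafisi tunes",
              "songsong haddafisitunes"]
gold = "songs on ghaddafis itunes"
for pair, score in gold_pairwise_scores(candidates, gold):
    print(pair, score)
```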
The training objective is to minimize the Mean Squared Error (MSE):

L_{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left( g^{*(i)}(s_a, s_b) - \hat{g}^{(i)}(s_a, s_b) \right)^2    (1)

where m is the number of training examples. To aggregate the pairwise comparisons, we follow a greedy algorithm proposed by Cohen et al. (1998) and used for preference ranking (Parakhin and Haluptzok, 2009). For each segmentation s in the candidate set S = {s_1, s_2, ..., s_k}, we calculate a single score Score_{PNR}(s) = \sum_{s \neq s_j \in S} g(s, s_j), and find the segmentation s_{max} corresponding to the highest score. We repeat the same procedure after removing s_{max} from S, and continue until S reduces to an empty set. Figure 1(a) shows the architecture of this model.

3.3 Margin Ranking (MR) Loss

As an alternative to the pairwise ranker (§3.2), we propose a pairwise model which learns from candidate pairs ⟨s_a, s_b⟩ but ranks each individual candidate directly rather than relatively. We define a new scoring function g′ which assigns a higher score to the better candidate, i.e., g′(s_a) > g′(s_b) if s_a is a better candidate than s_b, and vice versa. Instead of concatenating the feature vectors s_a and s_b, we feed them separately into two identical feedforward networks with shared parameters. During testing, we use only one of the networks to rank the candidates based on the g′ scores. For training, we add a ranking layer on top of the networks to measure the violations in the ranking order and minimize the Margin Ranking (MR) loss:

L_{MR} = \frac{1}{m} \sum_{i=1}^{m} \max\left(0,\; 1 - l^{(i)}_{ab}\, p^{(i)}_{ab}\right), \quad p^{(i)}_{ab} = \hat{g}'^{(i)}(s_a) - \hat{g}'^{(i)}(s_b), \quad l_{ab} = \begin{cases} 1 & g^{*}(s_a, s_b) > 0 \\ -1 & g^{*}(s_a, s_b) < 0 \\ 0 & \text{otherwise} \end{cases}    (2)

where m is the number of training samples. The architecture of this model is presented in Figure 1(b).

[Figure 1: Pairwise neural ranking models for hashtag segmentation: (a) Pairwise Ranking Model (MSE, §3.2); (b) Margin Ranking Loss with shared parameters (MR, §3.3); (c) Adaptive Multi-task Learning for Pairwise Ranking (MSE+Multitask, §3.4). Given two candidate segmentations s_a and s_b of hashtag h, the goal is to predict the segmentation's relative goodness score (g) or absolute score (g′).]

3.4 Adaptive Multi-task Learning

Both models in §3.2 and §3.3 treat all the hashtags uniformly. However, different features address different types of hashtags. By design, the linguistic features capture named entities and multi-word hashtags that exhibit word shape patterns, such as camel case. The ngram probabilities with Good-Turing smoothing gravitate towards multi-word segmentations with known words, as its estimate for unseen ngrams depends on the fraction of ngrams seen once, which can be very low (Heafield, 2013). The modified Kneser-Ney smoothing is more likely to favor segmentations that contain rare words, and single-word segmentations in particular. Please refer to §5.3 for a more detailed quantitative and qualitative analysis.

To leverage this intuition, we introduce a binary classification task to help the model differentiate single-word from multi-word hashtags. The binary classifier takes hashtag features h as the input and outputs w_h, which represents the probability of h being a multi-word hashtag. w_h is used as an adaptive gating value in our multi-task learning setup. The gold labels for this task are obtained at no extra cost by simply verifying whether the ground-truth segmentation has multiple words. We train the pairwise segmentation ranker and the binary single- vs.
multi-token hashtag classifier jointly, by minimizing LMSE for the pairwise ranker and the Binary Cross Entropy Error (LBCE) for the classifier: Lmultitask = λ1LMSE + λ2LBCE LBCE = −1 m m X i=1  l(i) ∗log(w(i) h )+ (1 −l(i)) ∗log(1 −w(i) h )  (3) where wh is the adaptive gating value, l ∈{0, 1} indicates if h is actually a multi-word hashtag and m is the number of training examples. λ1 and λ2 are the weights for each loss. For our experiments, we apply equal weights. More specifically, we divide the segmentation feature vector sa into two subsets: (a) sKN a with modified Kneser-Ney smoothing features, and (b) sGL a with Good-Turing smoothing and linguistic features. For an input candidate segmentation pair ⟨sa, sb⟩, we construct two pairwise vectors sKN ab = [sKN a ; sKN b ] and sGL ab = [sGL a ; sGL b ] by concatenation, then combine them based on the adaptive gating value wh before feeding them into the feedforward network G for pairwise ranking: ˆg(sa, sb) = G whsGL ab + (1 −wh)sKN ab  (4) We use summation with padding, as we find this simple ensemble method achieves similar performance in our experiments as the more complex multi-column networks (Ciresan et al., 2012). Figure 1(c) shows the architecture of this model. An analogue multi-task formulation can also be used for the Margin Ranking loss as: Lmultitask = λ1LMR + λ2LBCE. (5) 3.5 Features We use a combination of corpus-based and linguistic features to rank the segmentations. For a candidate segmentation s, its feature vector s includes the number of words in the candidate, the length of each word, the proportion of words in an English dictionary3 or Urban Dictionary4 (Nguyen et al., 2018), ngram counts from Google Web 1TB corpus (Brants and Franz, 2006), and ngram probabilities from trigram language models trained on the Gigaword corpus (Graff and Cieri, 2003) and 3https://pypi.org/project/pyenchant 4https://www.urbandictionary.com 2542 1.1 billion English tweets from 2010, respectively. We train two language models on each corpus: one with Good-Turing smoothing using SRILM (Stolcke, 2002) and the other with modified KneserNey smoothing using KenLM (Heafield, 2011). We also add boolean features, such as if the candidate is a named-entity present in the list of Wikipedia titles, and if the candidate segmentation s and its corresponding hashtag h satisfy certain word-shapes (more details in appendix A.1). Similarly, for hashtag h, we extract the feature vector h consisting of hashtag length, ngram count of the hashtag in Google 1TB corpus (Brants and Franz, 2006), and boolean features indicating if the hashtag is in an English dictionary or Urban Dictionary, is a named-entity, is in camel case, ends with a number, and has all the letters as consonants. We also include features of the bestranked candidate by the Word Breaker model. 3.6 Implementation Details We use the PyTorch framework to implement our multi-task pairwise ranking model. The pairwise ranker consists of an input layer, three hidden layers with eight nodes in each layer and hyperbolic tangent (tanh) activation, and a single linear output node. The auxiliary classifier consists of an input layer, one hidden layer with eight nodes and one output node with sigmoid activation. We use the Adam algorithm (Kingma and Ba, 2014) for optimization and apply a dropout of 0.5 to prevent overfitting. We set the learning rate to 0.01 and 0.05 for the pairwise ranker and auxiliary classifier respectively. For each experiment, we report results obtained after 100 epochs. 
For the baseline model used to extract the k initial candidates, we reimplementated the Word Breaker (Wang et al., 2011) as described in §2 and adapted it to use a language model trained on 1.1 billion tweets with Good-Turing smoothing using SRILM (Stolcke, 2002) to give a better performance in segmenting hashtags (§5.3). For all our experiments, we set k = 10. 4 Hashtag Segmentation Data We use two datasets for experiments (Table 3): (a) STANsmall, created by Bansal et al. (2015), which consists of 1,108 unique English hashtags from 1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset (Go and Huang, 2009) along with their crowdsourced segmentations and Data num. of Hashtags avg. avg. (multi-token%) #char #word Train 2518 (51.9%) 8.5 1.8 STANlarge Dev 629 (52.3%) 8.4 1.7 Test 9447 (53.0%) 8.6 1.8 STANsmall Test 1108 (60.5%) 9.0 1.9 Table 3: Statistics of the STANsmall and STANlarge datasets – number of unique hashtags, percentage of multi-token hashtags, average length of hashtags in characters and words. our additional corrections; and (b) STANlarge, our new expert curated dataset, which includes all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset. Dataset Analysis. STANsmall is the most commonly used dataset in previous work. However, after reexamination, we found annotation errors in 6.8%5 of the hashtags in this dataset, which is significant given that the error rate of the state-of-theart models is only around 10%. Most of the errors were related to named entities. For example, #lionhead, which refers to the “Lionhead” video game company, was labeled as “lion head”. Our Dataset. We therefore constructed the STANlarge dataset of 12,594 hashtags with additional quality control for human annotations. We displayed a tweet with one highlighted hashtag on the Figure-Eight6 (previously known as CrowdFlower) crowdsourcing platform and asked two workers to list all the possible segmentations. For quality control on the platform, we displayed a test hashtag in every page along with the other hashtags. If any annotator missed more than 20% of the test hashtags, then they were not allowed to continue work on the task. For 93.1% of the hashtags, the workers agreed on the same segmentation. We further asked three in-house annotators (not authors) to cross-check the crowdsourced annotations using a two-step procedure: first, verify if the hashtag is a named entity based on the context of the tweet; then search on Google to find the correct segmentation(s). We also asked the same annotators to fix the errors in STANsmall. The human upperbound of the task is estimated at ∼98% accuracy, where we consider the crowdsourced segmentations (two workers merged) as correct if at least one of them matches with our expert annotator’s segmentations. 5More specifically, 4.8% hashtags is missing one of the two acceptable segmentations and another 2.0% is incorrect segmentation. 
6https://figure-eight.com 2543 All Hashtags Multi-token Single-token A@1 F1@1 A@2 MRR A@1 F1@1 A@2 MRR A@1 A@2 MRR Original hashtag 51.0 51.0 – – 19.1 19.1 – – 100.0 – – Rule-based (Billal et al., 2016) 58.1 63.5 – – 57.6 66.5 – – 58.8 – – GATE Hashtag Tokenizer (M&G, 2014) 73.2 77.2 – – 71.4 78.0 – – 76.0 – – Viterbi (Berardi et al., 2011) 73.4 78.5 – – 74.5 83.1 – – 71.6 – – MaxEnt (C¸ elebi and ¨Ozg¨ur, 2017) 92.4 93.4 – – 91.9 93.6 – – 93.1 – – Word Breaker w/ Twitter LM 90.8 91.7 97.4 94.5 88.5 90.0 97.8 93.7 94.3 96.8 95.7 Pairwise linear ranker 88.1 89.9 97.2 93.1 83.8 86.8 97.3 91.3 94.7 97.0 95.9 Pairwise neural ranker (MR) 92.3 93.5 98.2 95.4 90.9 92.8 99.0 95.2 94.5 96.9 95.8 Pairwise neural ranker (MSE) 92.5 93.7 98.2 95.5 91.2 93.1 99.0 95.4 94.5 97.0 95.8 Pairwise neural ranker (MR+multitask) 93.0 94.3 97.8 95.7 91.5 93.7 98.7 95.4 95.2 96.6 96.0 Pairwise neural ranker (MSE+multitask) 94.5 95.2 98.4 96.6 93.9 95.1 99.4 96.8 95.4 96.8 96.2 Human Upperbound 98.0 98.3 – – 97.8 98.2 – – 98.4 – – Table 4: Evaluation results on the corrected version of STANsmall. For reference, on the original version of STANsmall, the Microsoft Word Breaker API reported an 84.6% F1 score and an 83.6% accuracy for the top one output (C¸ elebi and ¨Ozg¨ur, 2017), while our best model (MSE+multitask) reported 89.8% F1 and 91.0% accuracy. All Hashtags Multi-token Single-token A@1 F1@1 A@2 MRR A@1 F1@1 A@2 MRR A@1 A@2 MRR Original hashtag 55.5 55.5 – – 16.2 16.2 – – 100.0 – – Rule-based (Billal et al., 2016) 56.1 61.5 – – 56.0 65.8 – – 56.3 – – Viterbi (Berardi et al., 2011) 68.4 73.8 – – 71.2 81.5 – – 65.0 – – GATE Hashtag Tokenizer (M&G, 2014) 72.4 76.1 – – 70.0 76.8 – – 75.3 – – MaxEnt (C¸ elebi and ¨Ozg¨ur, 2017) 91.2 92.3 – – 90.2 92.4 – – 92.3 – – Word Breaker w/ Twitter LM 90.1 91.0 96.6 93.9 88.5 90.0 97.0 93.4 91.9 96.2 94.4 Pairwise linear ranker 89.2 91.1 96.3 93.3 84.2 87.8 95.6 91.0 94.8 97.0 95.9 Pairwise neural ranker (MR) 91.3 92.6 97.2 94.6 89.9 92.4 97.5 94.3 92.8 96.8 94.9 Pairwise neural ranker (MSE) 91.3 92.6 97.0 94.5 91.0 93.6 97.7 94.9 91.5 96.2 94.1 Pairwise neural ranker (MR+multitask) 91.4 92.7 97.2 94.6 90.0 92.6 97.7 94.4 92.9 96.6 94.9 Pairwise neural ranker (MSE+multitask) 92.4 93.6 97.3 95.2 91.9 94.1 98.0 95.4 93.0 96.5 94.9 Human Upperbound 98.6 98.8 – – 98.0 98.4 – – 99.2 – – Table 5: Evaluation results on our STANlarge test dataset. For single-token hashtags, the token-level F1@1 is equivalent to segmentation-level A@1. For multi-token cases, A@1 and F1@1 for the original hashtag baseline are non-zero because 11.4% of the hashtags have more than one acceptable segmentations. Our best model (MSE+multitask) shows a statistically significant improvement (p < 0.05) over the state-of-the-art approach (C¸ elebi and ¨Ozg¨ur, 2017) based on the paired bootstrap test (Berg-Kirkpatrick et al., 2012). 5 Experiments In this section, we present experimental results that compare our proposed method with the other state-of-the-art approaches on hashtag segmentation datasets. The next section will show experiments of applying hashtag segmentation to the popular task of sentiment analysis. 
5.1 Existing Methods We compare our pairwise neural ranker with the following baseline and state-of-the-art approaches: (a) The original hashtag as a single token; (b) A rule-based segmenter, which employs a set of word-shape rules with an English dictionary (Billal et al., 2016); (c) A Viterbi model which uses word frequencies from a book corpus7 (Berardi et al., 2011); 7Project Gutenberg http://norvig.com/big.txt (d) The specially developed GATE Hashtag Tokenizer from the open source toolkit,8 which combines dictionaries and gazetteers in a Viterbi-like algorithm (Maynard and Greenwood, 2014); (e) A maximum entropy classifier (MaxEnt) trained on the STANlarge training dataset. It predicts whether a space should be inserted at each position in the hashtag and is the current state-of-the-art (C¸ elebi and ¨Ozg¨ur, 2017); (f) Our reimplementation of the Word Breaker algorithm which uses beam search and a Twitter ngram language model (Wang et al., 2011); (g) A pairwise linear ranker which we implemented for comparison purposes with the same features as our neural model, but using perceptron as the underlying classifier (Hopkins and May, 2011) and minimizing the hinge 8https://gate.ac.uk/ 2544 Single Multi All A MRR A MRR A MRR Kneser-Ney 95.4 95.7 56.0 75.3 74.9 85.1 Good-Turing (GT) 91.4 93.5 85.9 91.8 88.6 92.6 Linguistic (Ling) 89.4 91.7 71.6 82.6 80.1 87.0 GT + Ling 92.4 93.9 86.2 92.3 88.9 92.7 All Features 91.1 93.1 89.0 93.7 90.0 93.4 Table 6: Evaluation of automatic hashtag segmentation (MSE) with different features on the STANlarge dev set. A denotes accuracy@1. While Kneser-Ney features perform well on single-token hashtags, GT+Ling features perform better on multi-token hashtags. loss between g∗and a scoring function similar to g′. It is trained on the STANlarge dataset. 5.2 Evaluation Metrics We evaluate the performance by the top k (k = 1, 2) accuracy (A@1, A@2), average token-level F1 score (F1@1), and mean reciprocal rank (MRR). In particular, the accuracy and MRR are calculated at the segmentation-level, which means that an output segmentation is considered correct if and only if it fully matches the human segmentation. Average token-level F1 score accounts for partially correct segmentation in the multi-token hashtag cases. 5.3 Results Tables 4 and 5 show the results on the STANsmall and STANlarge datasets, respectively. All of our pairwise neural rankers are trained on the 2,518 manually segmented hashtags in the training set of STANlarge and perform favorably against other state-of-the-art approaches. Our best model (MSE+multitask) that utilizes different features adaptively via a multi-task learning procedure is shown to perform better than simply combining all the features together (MR and MSE). We highlight the 24.6% error reduction on STANsmall and 16.5% on STANlarge of our approach over the previous SOTA (C¸ elebi and ¨Ozg¨ur, 2017) on the Multi-token hashtags, and the importance of having a separate evaluation of multi-word cases as it is trivial to obtain 100% accuracy for Singletoken hashtags. While our hashtag segmentation model is achieving a very high accuracy@2, to be practically useful, it remains a challenge to get the top one predication exactly correct. Some hashtags are very difficult to interpret, e.g., #BTVSMB refers to the Social Media Breakfast (SMB) in Burlington, Vermont (BTV). 
The improved Word Breaker with our addition of a Twitter-specific language model is a very strong Kneser-Ney Good-Turing Linguistic count Example Hashtags ◦ ◦ ◦ 31 #omnomnom #BTVSMB • ◦ ◦ 13 #commbank #mamapedia ◦ • ◦ 38 #wewantmcfly #winebarsf ◦ ◦ • 24 #cfp09 #TechLunchSouth • • ◦ 44 #twittographers #bringback • ◦ • 16 #iccw #ecom09 ◦ • • 53 #LetsGoPens #epicwin • • • 420 #prototype #newyork Table 7: Error (◦) and correct (•) segmentation analysis of three pairwise ranking models (MSE) trained with different feature sets Each row corresponds to one area in the Venn diagram; for example, ◦◦◦is the set of hashtags that all three models failed in the STANlarge dev data and •◦◦is the set of hashtags that only the model with Kneser-Ney language model features (but not the other two models) segmented correctly. baseline, which echos the findings of the original Word Breaker paper (Wang et al., 2011) that having a large in-domain language model is extremely helpful for word segmentation tasks. It is worth noting that the other state-of-the-art system (C¸ elebi and ¨Ozg¨ur, 2017) also utilized a 4-gram language model trained on 476 million tweets from 2009. 5.4 Analysis and Discussion Feature Analysis. To empirically illustrate the effectiveness of different features on different types of hashtags, we show the results for models using individual feature sets in pairwise ranking models (MSE) in Table 6. Language models with modified Kneser-Ney smoothing perform best on single-token hashtags, while Good-Turing and Linguistic features work best on multi-token hashtags, confirming our intuition about their usefulness in a multi-task learning approach. Table 7 shows a qualitative analysis with the first column (◦◦◦) indicating which features lead to correct or wrong segmentations, their count in our data and illustrative examples with human segmentation. Length of Hashtags. As expected, longer hashtags with more than three tokens pose greater challenges and the segmentation-level accuracy of our best model (MSE+multitask) drops to 82.1%. For many error cases, our model predicts a close-to-correct segmentation, e.g., #youknowyouupttooearly, #iseelondoniseefrance, which is also reflected by 2545 Type num. of Hashtags single 4426 (47.1%) 2 tokens 3436 (36.2%) 3 tokens 1085 (11.2%) 4 tokens 279 (2.9%) 5+ tokens 221 (2.6%) Figure 2: Token-level F1 scores (MSE+multitask) on hashtags of different lengths in the STANlarge test set. Figure 3: Token-level F1 scores of our pairwise ranker (MSE+multitask) and Word Breaker on the STANlarge test set, using language models trained with varying amounts of data. the higher token-level F1 scores across hashtags with different lengths (Figure 2). Size of the Language Model. Since our approach heavily relies on building a Twitter language model, we experimented with its sizes and show the results in Figure 3. Our approach can perform well even with access to a smaller amount of tweets. The drop in F1 score for our pairwise neural ranker is only 1.4% and 3.9% when using the language models trained on 10% and 1% of the total 1.1 billion tweets, respectively. Time Sensitivity. Language use in Twitter changes with time (Eisenstein, 2013). Our pairwise ranker uses language models trained on the tweets from the year 2010. We tested our approach on a set of 500 random English hashtags posted in tweets from the year 2019 and show the results in Table 8. 
With a segmentation-level accuracy of 94.6% and average token-level F1 score of 95.6%, our approach performs favorably on 2019 hashtags. A@1 F1@1 MRR Word Breaker w/ Twitter LM 92.1 93.9 94.7 Pairwise neural ranker (MSE+multitask) 94.6 95.6 96.7 Table 8: Evaluation results on 500 random hashtags from the year 2019. 6 Extrinsic Evaluation: Twitter Sentiment Analysis We attempt to demonstrate the effectiveness of our hashtag segmentation system by studying its impact on the task of sentiment analysis in Twitter (Pang et al., 2002; Nakov et al., 2016; Rosenthal et al., 2017). We use our best model (MSE+multitask), under the name HashtagMaster, in the following experiments. 6.1 Experimental Setup We compare the performance of the BiLSTM+Lex (Teng et al., 2016) sentiment analysis model under three configurations: (a) tweets with hashtags removed, (b) tweets with hashtags as single tokens excluding the # symbol, and (c) tweets with hashtags as segmented by our system, HashtagMaster. BiLSTM+Lex is a state-of-the-art open source system for predicting tweet-level sentiment (Tay et al., 2018). It learns a context-sensitive sentiment intensity score by leveraging a Twitterbased sentiment lexicon (Tang et al., 2014). We use the same settings as described by Teng et al. (2016) to train the model. We use the dataset from the Sentiment Analysis in Twitter shared task (subtask A) at SemEval 2017 (Rosenthal et al., 2017). 9 Given a tweet, the goal is to predict whether it expresses POSITIVE, NEGATIVE or NEUTRAL sentiment. The training and development sets consist of 49,669 tweets and we use 40,000 for training and the rest for development. There are a total of 4,840 tweets containing 12,128 hashtags in the SemEval 2017 test set, and our hashtag segmenter ended up splitting 6,975 of those hashtags present in 3,384 tweets. 6.2 Results and Analysis In Table 9, we report the results based on the 3,384 tweets where HashtagMaster predicted a split, as for the rest of tweets in the test set, the hashtag segmenter would neither improve nor worsen the sentiment prediction. Our hashtag segmenter successfully improved the sentiment analysis performance by 2% on average recall and FPN 1 comparing to having hashtags unsegmented. This improvement is seemingly small but decidedly important for tweets where sentiment-related information is embedded in multi-word hashtags 9We did not use the Stanford Sentiment Analysis Dataset (Go and Huang, 2009), which was used to construct the STANsmall and STANlarge hashtag datasets, because of its noisy sentiment labels obtained using distant supervision. 2546 AvgR FP N 1 Acc Original tweets 61.7 60.0 58.7 −No Hashtags 60.2 58.8 54.2 + Single-word 62.3 60.3 58.6 + HashtagMaster 64.3 62.4 58.6 Table 9: Sentiment analysis evaluation on the 3384 tweets from SemEval 2017 test set using the BiLSTM+Lex method (Tang et al., 2014). Average recall (AvgR) is the official metric of the SemEval task and is more reliable than accuracy (Acc). FP N 1 is the average F1 of positive and negative classes. Having the hashtags segmented by our system HashtagMaster (i.e., MSE+multitask) significantly improves the sentiment prediction than not (p < 0.05 for AvgR and FP N 1 against the single-word setup). and sentiment prediction would be incorrect based only on the text (see Table 10 for examples). 
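The three configurations compared in Section 6.1 amount to a simple preprocessing choice over the input tweets. The sketch below shows one way to produce them; the toy segmenter stands in for HashtagMaster and is purely illustrative.

```python
import re

HASHTAG = re.compile(r"#(\w+)")

def remove_hashtags(tweet):
    """Configuration (a): drop hashtags entirely."""
    return HASHTAG.sub("", tweet).strip()

def hashtags_as_single_tokens(tweet):
    """Configuration (b): keep each hashtag as one token, without the # symbol."""
    return HASHTAG.sub(lambda m: m.group(1), tweet)

def hashtags_segmented(tweet, segment):
    """Configuration (c): replace each hashtag with its predicted segmentation."""
    return HASHTAG.sub(lambda m: segment(m.group(1)), tweet)

toy_segmentations = {"cutoutthecrap": "cut out the crap"}  # stand-in for HashtagMaster
tweet = "After some 4 months of vegetarianism .. it's all the same industry. #cutoutthecrap"
print(remove_hashtags(tweet))
print(hashtags_as_single_tokens(tweet))
print(hashtags_segmented(tweet, lambda h: toy_segmentations.get(h.lower(), h)))
```

Only tweets where the segmenter actually predicts a split (configuration (c) differs from (b)) can change the downstream sentiment prediction, which is why the evaluation in Table 9 is restricted to those 3,384 tweets.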
In fact, 2,605 out of the 3,384 tweets have multiword hashtags that contain words in the Twitterbased sentiment lexicon (Tang et al., 2014) and 125 tweets contain sentiment words only in the hashtags but not in the rest of the tweet. 7 Other Related Work Automatic hashtag segmentation can improve the performance of many applications besides sentiment analysis, such as text classification (Billal et al., 2016), named entity linking (Bansal et al., 2015) and modeling user interests for recommendations (Chen et al., 2016). It can also help in collecting data of higher volume and quality by providing a more nuanced interpretation of its content, as shown for emotion analysis (Qadir and Riloff, 2014), sarcasm and irony detection (Maynard and Greenwood, 2014; Huang et al., 2018). Better semantic analysis of hashtags can also potentially be applied to hashtag annotation (Wang et al., 2019), to improve distant supervision labels in training classifiers for tasks such as sarcasm (Bamman and Smith, 2015), sentiment (Mohammad et al., 2013), emotions (Abdul-Mageed and Ungar, 2017); and, more generally, as labels for pre-training representations of words (Weston et al., 2014), sentences (Dhingra et al., 2016), and images (Mahajan et al., 2018). 8 Conclusion We proposed a new pairwise neural ranking model for hashtag segmention and showed significant performance improvements over the state-of-theart. We also constructed a larger and more curated dataset for analyzing and benchmarking Ofcourse #clownshoes #altright #IllinoisNazis #FinallyAtpeaceWith people calling me “Kim Fatty the Third” Leslie Odom Jr. sang that. #ThankYouObama After some 4 months of vegetarianism .. it’s all the same industry. #cutoutthecrap Table 10: Sentiment analysis examples where our HashtagMaster segmentation tool helped. Red and blue words are negative and positive entries in the Twitter sentiment lexicon (Tang et al., 2014), respectively. hashtag segmentation methods. We demonstrated that hashtag segmentation helps with downstream tasks such as sentiment analysis. Although we focused on English hashtags, our pairwise ranking approach is language-independent and we intend to extend our toolkit to languages other than English as future work. Acknowledgments We thank Ohio Supercomputer Center (Center, 2012) for computing resources and the NVIDIA for providing GPU hardware. We thank Alan Ritter, Quanze Chen, Wang Ling, Pravar Mahajan, and Dushyanta Dhyani for valuable discussions. We also thank the annotators: Sarah Flanagan, Kaushik Mani, and Aswathnarayan Radhakrishnan. This material is based in part on research sponsored by the NSF under grants IIS-1822754 and IIS-1755898, DARPA through the ARO under agreement number W911NF-17-C-0095, through a Figure-Eight (CrowdFlower) AI for Everyone Award and a Criteo Faculty Research Award to Wei Xu. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of the U.S. Government. References Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL, pages 718–728. David Bamman and Noah A Smith. 2015. Contextualized Sarcasm Detection on Twitter. In Ninth International AAAI Conference on Web and Social Media, ICWSM, pages 574–577. Piyush Bansal, Romil Bansal, and Vasudeva Varma. 2015. Towards Deep Semantic Analysis of Hashtags. 
In Proceedings of the 37th European Conference on Information Retrieval, ECIR, pages 453–464. 2547 Giacomo Berardi, Andrea Esuli, Diego Marcheggiani, and Fabrizio Sebastiani. 2011. ISTI@TREC Microblog Track 2011: Exploring the Use of Hashtag Segmentation and Text Quality Ranking. In Text REtrieval Conference (TREC). Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An Empirical Investigation of Statistical Significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL, pages 995– 1005. Belainine Billal, Alexsandro Fonseca, and Fatiha Sadat. 2016. Named Entity Recognition and Hashtag Decomposition to Improve the Classification of Tweets. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), COLING, pages 102–111. Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. Linguistic Data Consortium (LDC). Arda C¸ elebi and Arzucan ¨Ozg¨ur. 2016. Segmenting Hashtags using Automatically Created Training Data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC, pages 2981–2985. Arda C¸ elebi and Arzucan ¨Ozg¨ur. 2017. Segmenting Hashtags and Analyzing Their Grammatical Structure. Journal of Association For Information Science and Technology (JASIST), 69(5):675–686. Ohio Supercomputer Center. 2012. Oakley supercomputer. http://osc.edu/ark:/19495/ hpc0cvqn. Stanley F Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359–394. Tao Chen, Xiangnan He, and Min-Yen Kan. 2016. Context-aware Image Tweet Modelling and Recommendation. In Proceedings of the 24th ACM International Conference on Multimedia, MM, pages 1018– 1027. Dan Ciresan, Ueli Meier, and J¨urgen Schmidhuber. 2012. Multi-column Deep Neural Networks for Image Classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 3642–3649. William W Cohen, Robert E Schapire, and Yoram Singer. 1998. Learning to Order Things. In Advances in Neural Information Processing Systems, NIPS, pages 451–457. Thierry Declerck and Piroska Lendvai. 2015. Processing and normalizing hashtags. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP, pages 104–109. Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William Cohen. 2016. Tweet2Vec: Character-Based Distributed Representations for Social Media. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL, pages 269–274. Jacob Eisenstein. 2013. What to do about bad language on the Internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 359–369. Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating named entities in Twitter data with crowdsourcing. In Proceedings of the Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, NAACL, pages 80–88. Bhayani R. Go, A. and L. Huang. 2009. Twitter Sentiment Classification using Distant Supervision. CS224N Project Report, Stanford. Irving J Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237–264. David Graff and Christopher Cieri. 2003. English Gigaword LDC2003T05. Linguistic Data Consortium (LDC). Bo Han and Timothy Baldwin. 
2011. Lexical Normalisation of Short Text Messages: Makn Sens a# twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, ACL, pages 368–378. Kenneth Heafield. 2011. KenLM: Faster and Smaller Language Model Queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT, pages 187–197. Kenneth Heafield. 2013. Efficient Language Modeling Algorithms with Applications to Statistical Machine Translation. Ph.D. thesis, Carnegie Mellon University. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP. Hen-Hsen Huang, Chiao-Chen Chen, and Hsin-Hsi Chen. 2018. Disambiguating false-alarm hashtag usages in tweets for irony detection. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL, pages 771–777. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference for Learning Representations, ICLR. 2548 Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 181–184. Philipp Koehn and Kevin Knight. 2003. Empirical methods for compound splitting. In Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics, EACL, pages 187–194. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the Limits of Weakly Supervised Pretraining. In Tech Report. Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC, pages 4238–4243. Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-theart in sentiment analysis of tweets. In Proceedings of the Seventh International Workshop on Semantic Evaluation, SemEval, pages 321–327. Preslav Nakov, Sara Rosenthal, Svetlana Kiritchenko, Saif M. Mohammad, Zornitsa Kozareva, Alan Ritter, Veselin Stoyanov, and Xiaodan Zhu. 2016. Developing a successful SemEval task in sentiment analysis of Twitter and other social media texts. Language Resources and Evaluation, 50(1):35–65. Dong Nguyen, Barbara McGillivray, and Taha Yasseri. 2018. Emo, love and god: making sense of urban dictionary, a crowd-sourced online dictionary. Royal Society Open Science, 5(5):172320. Ozer Ozdikis, Pinar Senkul, and Halit Oguztuzun. 2012. Semantic Expansion of Hashtags for Enhanced Event Detection in Twitter. In Proceedings of the 1st international Workshop on Online Social Systems. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 79–86. M. Parakhin and P. Haluptzok. 2009. Finding the Most Probable Rranking of Objects with Probabilistic Pairwise Preferences. In Proceedings of the 10th International Conference on Document Analysis and Recognition, ICDAR, pages 616–620. Fuchun Peng and Dale Schuurmans. 2001. A hierarchical em approach to word segmentation. In NLPRS, pages 475–480. Ashequl Qadir and Ellen Riloff. 2014. 
Learning emotion indicators from tweets: Hashtags, hashtag patterns, and phrases. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1203–1209. Jack Reuter, Jhonata Pereira-Martins, and Jugal Kalita. 2016. Segmenting twitter hashtags. International Journal on Natural Language Computing, 5:23–36. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named Entity Recognition in Tweets: An Experimental Study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1524–1534. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment Analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval, pages 502–518. Justin Sampson, Fred Morstatter, Liang Wu, and Huan Liu. 2016. Leveraging the implicit structure within social media for emergent rumor detection. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM, pages 2377–2382. C. Simeon, H. J. Hamilton, and R. J. Hilderman. 2016. Word segmentation algorithms with lexical resources for hashtag classification. In Proceedings of the 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 743–751. Richard Sproat and Chilin Shih. 1990. A statistical method for finding word boundaries in chinese text. Computer Processing of Chinese and Oriental Languages, 4(4):336–351. Andreas Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing, ICSLP, pages 901–904. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. Building Large-Scale Twitter-Specific Sentiment Lexicon : A Representation Learning Approach. In Proceedings of the 25th International Conference on Computational Linguistics, COLING, pages 172–182. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Attentive gated lexicon reader with contrastive contextual co-attention for sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 3443–3453. Zhiyang Teng, Duy Tin Vo, and Yue Zhang. 2016. Context-Sensitive Lexicon Features for Neural Sentiment Analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1629–1638. 2549 Kuansan Wang, Christopher Thrasher, and BoJune Paul Hsu. 2011. Web Scale NLP: A Case Study on URL Word Breaking. In Proceedings of the 20th International Conference on World Wide Web, WWW, pages 357–366. Yue Wang, Jing Li, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Microblog Hashtag Generation via Encoding Conversation Contexts. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Jason Weston, Sumit Chopra, and Keith Adams. 2014. # tagspace: Semantic embeddings from hashtags. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1822–1827. Nianwen Xue and Libin Shen. 2003. Chinese word segmentation as LMR tagging. In Proceedings of the second SIGHAN workshop on Chinese Language Processing, SIGHAN, pages 176–179. A Appendix A.1 Word-shape rules Our model uses the following word shape rules as boolean features. If the candidate segmentation s and its corresponding hashtag h satisfies a word shape rule, then the boolean feature is set to True. 
Rule             | Hashtag → Segmentation
Camel Case       | XxxXxx → Xxx + Xxx
Consonants       | cccc → cccc
Digits as prefix | ddwwww → dd + wwww
Digits as suffix | wwwwdd → wwww + dd
Underscore       | www_www → www + _ + www

Table 11: Word-shape rule features used to identify good segmentations. Here, X and x represent capitalized and non-capitalized alphabetic characters respectively, c denotes a consonant, d denotes a digit, and w denotes any letter or digit.
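A direct implementation of these rules is straightforward. The sketch below derives the rule-implied segmentation for a hashtag and exposes the boolean feature as a match against a candidate; the regular expressions are our reading of Table 11, not necessarily the authors' exact matching logic.

```python
import re

def rule_segmentation(hashtag):
    """Return the segmentation implied by the word-shape rules in Table 11,
    or None when no rule applies."""
    if "_" in hashtag:                                    # Underscore: www_www -> www + _ + www
        return " ".join(p for p in re.split(r"(_)", hashtag) if p)
    if re.fullmatch(r"(?:[A-Z][a-z]+){2,}", hashtag):     # Camel Case: XxxXxx -> Xxx + Xxx
        return " ".join(re.findall(r"[A-Z][a-z]+", hashtag))
    m = re.fullmatch(r"(\d+)([A-Za-z]+)", hashtag)        # Digits as prefix: ddwwww -> dd + wwww
    if m:
        return " ".join(m.groups())
    m = re.fullmatch(r"([A-Za-z]+)(\d+)", hashtag)        # Digits as suffix: wwwwdd -> wwww + dd
    if m:
        return " ".join(m.groups())
    if re.fullmatch(r"[bcdfghjklmnpqrstvwxyz]+", hashtag, re.IGNORECASE):  # Consonants: keep whole
        return hashtag
    return None

def word_shape_feature(hashtag, candidate):
    """Boolean feature: True when the candidate matches the rule-implied split."""
    return rule_segmentation(hashtag) == candidate

print(rule_segmentation("TechLunchSouth"))  # Tech Lunch South
print(rule_segmentation("cfp09"))           # cfp 09
print(rule_segmentation("BTVSMB"))          # BTVSMB (all consonants, kept unsegmented)
```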
2019
242
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2550–2560 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2550 Entity-Centric Contextual Affective Analysis Anjalie Field Carnegie Mellon University [email protected] Yulia Tsvetkov Carnegie Mellon University [email protected] Abstract While contextualized word representations have improved state-of-the-art benchmarks in many NLP tasks, their potential usefulness for social-oriented tasks remains largely unexplored. We show how contextualized word embeddings can be used to capture affect dimensions in portrayals of people. We evaluate our methodology quantitatively, on held-out affect lexicons, and qualitatively, through case examples. We find that contextualized word representations do encode meaningful affect information, but they are heavily biased towards their training data, which limits their usefulness to in-domain analyses. We ultimately use our method to examine differences in portrayals of men and women. 1 Introduction Pre-trained contextualized word embeddings (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2018) have become increasingly common in natural language processing (NLP), improving stateof-the-art results in many standard NLP tasks. However, beyond standard tasks, NLP tools are also vital to more open-ended exploratory tasks, particularly in social science. How these types of tasks can benefit from pre-trained contextualized embeddings has not yet been explored. In this work, we show how to leverage these embeddings to conduct entity-centric analyses, which broadly seek to address how entities are portrayed in narrative text (Bamman et al., 2013; Card et al., 2016). For instance, in the sentence “Batman apprehends the Joker”, a reader might infer that Batman is good, the Joker is evil, and Batman is more powerful than the Joker. Analyzing how people are portrayed in narratives is a key starting point to identifying stereotypes and bias (Joseph et al., 2017; Fast et al., 2016; Field et al., 2019). Existing methods for analyzing people portrayals take either an unsupervised approach (Bamman et al., 2013), which requires large amounts of data and can be difficult to interpret, or rely on domain-specific knowledge (Fast et al., 2016; Wagner et al., 2015), which does not generalize well to other hypotheses and data domains. Furthermore, most models are limited to discrete word-level features, whereas continuous-valued embeddings are typically more expressive. We introduce a novel approach to analyzing entities that maps contextualized embeddings to interpretable dimensions. Specifically, we propose using pre-trained embeddings to extract affect information about target entities. Social psychology research has identified 3 primary affect dimensions: Potency (strength/weakness of an identity), Valence (goodness/badness of an identity), and Activity (activeness/passiveness of an identity) (Osgood et al., 1957; Russell, 1980, 2003). We refer to these dimensions as power, sentiment, and agency for consistency with prior work in NLP (Sap et al., 2017; Rashkin et al., 2016; Field et al., 2019). Thus, in the previous example, “Batman apprehends the Joker”, we might associate Batman with high power, high sentiment, and high agency. While much literature in NLP has examined sentiment, analyses of power have largely been limited to a dialog setting (Prabhakaran, 2015), and almost no work has examined agency. 
We propose that mapping entities into these 3 dimensions provides a framework for examining narratives that is more holistic than sentiment analyses and more generalizable than task-specific frameworks. The idea that these 3 dimensions are sufficient for capturing affect has also formed the basis of social psychological models (Heise, 2007; Alhothali and Hoey, 2015). Drawing from this theory, we combine con2551 textualized word embeddings with affect lexicons (Mohammad, 2018) to obtain power, sentiment, and agency scores for entities in narrative text. After describing our methodology (§2), we evaluate how well these contextualized embeddings capture affect information on held-out lexicons (§4.1). We then evaluate how well our method scores entities on manually curated benchmarks (§4.2) and through qualitative examples (§4.3). Finally, we use our method to examine different portrayals of men and women (§5), focusing on the same domains as prior work (Wagner et al., 2015; Fu et al., 2016). Ultimately, our work suggests that contexualized embeddings have the potential to improve analyses of entity portrayals. However, we find that these representations are biased towards portrayals in the training data, which limits their usefulness to analyzing in-domain data. Our contributions in this work include: (1) a novel method for analyzing entities in a narrative that is both interpretable and generalizable, (2) an assessment of how well contextualized word embeddings capture affect information, and (3) an analysis of entity portrayals in various domains. 2 Methodology Given an entity, such as “Batman”, mentioned in a narrative, our goal is to obtain power, sentiment, and agency scores for the entity. We take two approaches: supervised regression and semisupervised embedding projection. For both approaches, we use pre-trained contextualized embeddings as features and we use the NRC Valence, Arousal, and Dominance (VAD) Lexicon as training and test data (Mohammad, 2018). While we use this lexicon because its annotations contain our target dimensions of power, sentiment, and agency, our methodology readily generalizes to other lexicons. 2.1 Regression Model In the regression model, we take a supervised approach, using annotations from the NRC VAD Lexicon as training data. Given a training word w and a large training corpus, we extract a contextual embedding e for every instance of w in the corpus. We use off-the-shelf pre-trained language models to extract sentence-level embeddings with no additional fine-tuning. Then, we average over all e embeddings for each instance w to obtain a sinLow High timid resourceful weakly powerfully Power cowardly courageous inferior superior clumsy skillful negative positive pessimistic optimistic Sentiment annoyed amused pessimism optimism disappointed pleased silently furiously meek lusty Agency homely sexy bored flustered quietly frantically Table 1: Polar-opposite word pairs identified by ASP gle feature vector for each training point. We then train a Kernel Ridge Regression model using these embeddings as features.1 To extract affect scores for an entity in a narrative, we use the same pre-trained language model to extract a contextual embedding for the entity. Then, we feed this embedding through the regression model to obtain power, sentiment, and agency scores. When an entity occurs multiple times in the narrative, we average over the contextual embeddings for each occurrence of the entity and score the averaged embedding. 
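As a concrete illustration of the regression model just described, the sketch below averages the contextual embeddings of each training word, fits scikit-learn's KernelRidge on the NRC VAD power (dominance) scores, and scores an entity from its averaged embedding. The input dictionaries and the 1024-dimensional toy vectors (the size of an ELMo layer) are placeholders rather than the authors' data pipeline; the RBF kernel with alpha = 0.6 and gamma = 1 follows the hyperparameters reported in Appendix B.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def average_embedding(vectors):
    """One vector per word/entity: the mean over all of its contextual embeddings."""
    return np.mean(np.stack(vectors), axis=0)

def train_affect_regressor(word_embeddings, affect_scores):
    """word_embeddings: {word: [embedding, ...]}; affect_scores: {word: score in [0, 1]}."""
    words = [w for w in word_embeddings if w in affect_scores]
    X = np.stack([average_embedding(word_embeddings[w]) for w in words])
    y = np.array([affect_scores[w] for w in words])
    model = KernelRidge(kernel="rbf", alpha=0.6, gamma=1.0)  # settings from Appendix B
    model.fit(X, y)
    return model

def score_entity(model, entity_embeddings):
    """Average the embeddings of all mentions of the entity, then score once."""
    return float(model.predict(average_embedding(entity_embeddings)[None, :])[0])

# Toy usage with random stand-ins for ELMo vectors (1024 dimensions per layer).
rng = np.random.default_rng(0)
embs = {w: [rng.normal(size=1024) for _ in range(3)] for w in ["king", "pauper", "table"]}
power = {"king": 0.9, "pauper": 0.2, "table": 0.4}
model = train_affect_regressor(embs, power)
print(score_entity(model, [rng.normal(size=1024) for _ in range(2)]))
```

Separate regressors are trained for power, sentiment, and agency; only the target scores change.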
2.2 Affect Subspace Projection (ASP) The main disadvantage of the regression approach is that we are unable to control for confounds and prevent overfitting to the training data. For example, many low-agency nouns tend to be inanimate objects (i.e. table), while high-agency nouns are people-oriented words (i.e. dictator). Thus, we can expect that the model learns to predict the difference between classes of nouns, rather than solely learning the affect dimension of interest. While other variations of regression allow for the inclusion of covariates and confounds, we have no systematic way to quantify or even identify these confounds. Instead, we devise a method to isolate dimensions of power, agency, and sentiment by first identifying corresponding subspaces in the embedding space and then projecting entities onto 1We also experimented with Linear Regression and Ridge Regression, but found that Kernel Ridge Regression performed the best. 2552 these dimensions. We refer to this method as affect subspace projection (ASP). We describe this process for obtaining power scores; the agency and sentiment dimensions are analogous. In order to isolate the power subspace, we draw inspiration from (Bolukbasi et al., 2016). First, we need to identify pairs of words whose meanings differ only in that one word connotes high power and the second word connotes low power. We define a set H, which consists of the |H| highest-powered words from the VAD lexicon and a set L, which consists of the |L| lowest powered words from the VAD Lexicon. For every word wh ∈H, we use cosine similarity over contextual embedding representations to identify wl ∈L, the low-powered word that is most similar to wh. We allow each wl to match to at most one wh. Thus, we identify pairs of words (wh, wl), where wh and wl are very similar words but with polar opposite power scores. Finally, we keep only the N pairs with the greatest cosine similarity. We tune hyperparameters |H|, |L|, and N over a validation set. We show examples of extracted pairs for each dimension in Table 1. Next, we use these paired words to construct a set of vectors whose direction of greatest variance is along the power subspace. For each pair of high and low power words (wh, wl), we take their embedding representations eh and el in the same way as in the regression model. We then define µ = (eh + el)/2, and construct a matrix M, where each row is el −µ or eh −µ. Thus, M is a d×2N dimensional matrix, where d is the dimension of the embeddings. We then run PCA over M to extract its principle components. For all 3 affect dimensions, the first principle component captures the highest percentage of variance (Appendix A), followed by a sharp drop off. Thus, we keep the first principle component as the target subspace. Finally, to score an entity in a narrative, we take the entity’s contextual embedding representation and project it onto the identified subspace. Because we keep only the first principle component as the target subspace, the projection results in a single-dimensional vector, i.e., a power score. We repeat the process for agency and sentiment, constructing 3 separate M matrices in order to obtain power, sentiment, and agency scores. 3 Experimental Setup The NRC VAD Lexicon contains valence (sentiment), arousal (agency), and dominance (power) annotations for more than 20,000 English words. It was created through manual annotations using Best–Worst scaling. The final annotations are on a scale from 0 (i.e. lower power) to 1 (i.e. 
high power) (Mohammad, 2018). We randomly divide the lexicon into training (16,007), dev (2,000), and test (2,000) sets. We extract embeddings to train our models from a corpus of 42,306 Wikipedia movie plot summaries (Bamman et al., 2013).2 We use two pretrained language models to extract embeddings: ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019). It is important to note that the movie plots corpus we used for extraction is distinct from the corpora used to train ELMo (5.5B tokens from Wikipedia and WMT news crawl) and BERT (800M-word BooksCorpus and 2,500Mword Wikipedia). We use two variants of BERT to extract embeddings. In the first, referred to as “BERT-masked”, we mask out the target word before extracting embeddings from an input sentence. Masking out target words is a part of the BERT training objective (Devlin et al., 2019). By using masks in our embedding extractions, we force the model to produce an embedding solely from the context surrounding the word, rather than relying on information from the word itself. In the second variant, referred to as “BERT”, we extract embeddings over each sentence containing a target without modification. We report further details including hyperparamter settings in Appendix B. 4 Results and Analysis 4.1 Lexicon Correlations Table 2 shows the Pearson correlations between gold annotations and the scores predicted by our models over the held-out VAD test set. The high correlations demonstrate that both the regression and ASP models successfully capture information about power, sentiment, and agency from contextualized embeddings. The ELMo embeddings and unmasked BERT embeddings perform approximately the same. However, the masked BERT 2When experimenting with other training corpora, such as newspaper articles, we found the choice of training corpus had little impact on results. 2553 Regression Power Sentiment Agency ELMo 0.78 0.84 0.76 BERT 0.79 0.83 0.78 BERT-masked 0.64 0.70 0.62 ASP Power Sentiment Agency ELMo 0.65 0.76 0.63 BERT 0.65 0.71 0.66 BERT-masked 0.41 0.47 0.41 Table 2: Pearson correlations between gold NRC VAD labels and scores predicted by our models. Correlations are generally high, with the regression method outperforming ASP. All correlations are statistically significant (p < 1e −75). embeddings perform markedly worse than the unmasked embeddings.3 The poorer performance of the masked embeddings demonstrates the extent to which the BERT model biases representations towards the actual observed word, which is explicitly one of the motivations of the BERT training objective (Devlin et al., 2019). More specifically, when we mask out the target before extracting embeddings, we force the extracted embedding to only encode information from the surrounding context. Then any improvements in performance when we do not mask out the target are presumably obtained from the word-form for the target itself. For example, we may score “king” as high-powered because “king” often occurred as a high-powered entity in the data used to train the BERT model, regardless of whether or not it appeared to be high-powered in the corpus we ultimately extract embeddings from. Nevertheless, training with BERT-masked embeddings still results in statistically significant correlations, which suggests that some affect information is derived from surrounding context. The regression model generally outperforms ASP on this task. 
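Returning to the ASP procedure of Section 2.2, the sketch below builds the difference matrix from pre-matched (high, low) word pairs, extracts the first principal component with scikit-learn's PCA, and projects an entity embedding onto it. The pair-matching step (cosine similarity between the |H| highest- and |L| lowest-scored words) is assumed to have been done already, and all names here are our own.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_affect_subspace(pairs):
    """pairs: list of (e_high, e_low) embedding pairs for polar-opposite words.
    Returns a unit vector spanning the affect subspace (first principal component)."""
    rows = []
    for e_high, e_low in pairs:
        mu = (e_high + e_low) / 2.0
        rows.append(e_high - mu)   # each pair contributes two centered rows
        rows.append(e_low - mu)
    M = np.stack(rows)             # shape (2N, d)
    pca = PCA(n_components=1)
    pca.fit(M)
    # Note: the sign of a principal component is arbitrary; in practice it can be
    # oriented by checking that known high-affect words project to larger values.
    return pca.components_[0]

def project(entity_embedding, direction):
    """Scalar affect score: projection of the entity embedding onto the subspace."""
    return float(np.dot(entity_embedding, direction))

# Toy usage with random stand-ins for contextual embeddings.
rng = np.random.default_rng(1)
pairs = [(rng.normal(size=1024), rng.normal(size=1024)) for _ in range(200)]
power_direction = fit_affect_subspace(pairs)
print(project(rng.normal(size=1024), power_direction))
```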
The regression model has an advantage over ASP in that it is directly trained over 3One of the drawbacks of context-based word embeddings is that antonyms like “positive” and “negative” tend to have similar embeddings, because they tend to be used in similar contexts. However, given the breadth of words in the VAD lexicon, we do expect context to differ for oppositely scored words. For instance we would expect “pauper” and “king” to be used in different contexts, as well as “pauper” and “powerful”. Regression ASP ELMo 0.51 0.21 BERT 0.38 0.38 BERT-masked 0.17 -0.085 ELMo + Freq 0.65 0.48 Frequency Baseline 0.61 Field et al. (2019) -0.12 Table 3: Spearman correlations between automatically induced power scores and Forbes power ranking. Correlations for ELMo regression (p = 0.029), ELMo regression + Freq (p = 0.003), and the frequency baseline (p = 0.007) are statistically significant. The ELMo regression + Freq model performs the best. the full lexicon, whereas ASP chooses a subset of extreme words to guide the model. However, as discussed in §2, it is difficult to determine what effect other confounds have on the regression model, while the ASP approach provides more concrete evidence that these contextualized word embeddings encode affect information. 4.2 Quantitative Analysis of Entity Scores Next, we evaluate how well our models capture affect information in entities, rather than words, by assessing power scores through two metrics. We compare our models against the entity-scoring metric proposed by Field et al. (2019) and against a frequency baseline, where we consider an entity’s power score to be the number of times the entity is mentioned in the text. First, we consider an in-domain task, where we compare our metrics for scoring power with a standard benchmark that we expect to be reflected in both the data we use to extract embeddings and the data used to train ELMo and BERT. More specifically, we use the power scores obtained from our model to rank the 20 most powerful people in 2016 according to Forbes Magazine.4 This is a particularly difficult task: unlike prior work, which seeks to identify the most powerful people in a corpus (Field et al., 2019), we seek to rank these people according to their power, which requires more precise scores. Furthermore, the frequency metric supplies a particularly strong baseline. The metrics that Forbes Magazine uses to compose the list of powerful people include a person’s influence as well as how actively they 4http://bit.ly/2W5Jvnf 2554 use their power.5 Under these conditions, Forbes Magazine may consider a person to be powerful simply because they are mentioned frequently in the media. Additionally, we can surmise that people who actively use their power are mentioned frequently in the media. Table 3 presents Spearman correlations between our scores and rank on the Forbes list for each model. For all metrics, we construct embeddings from every instance of each person’s full name in U.S. articles from 2016 in the NOW news corpus.6 In addition to the proposed methods, we used our best performing model (regression with ELMo) to augment the frequency baseline, by normalizing and summing the frequency scores with the scores from this model. This combined model achieves the strongest correlation (raw scores from this model are shown in Figure 5). Furthermore, the regression with ELMo model alone achieves a statistically significant correlation even without the incorporation of frequency scores. 
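The combined metric described above (normalized mention frequency plus normalized regression score) and its Spearman comparison against the Forbes ranking can be reproduced in a few lines. The dictionaries below are hypothetical, illustrative inputs only, not the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

def min_max(values):
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def combined_power_scores(people, freq, reg):
    """Normalize mention frequency and regression power separately, then sum them."""
    f = min_max([freq[p] for p in people])
    r = min_max([reg[p] for p in people])
    return dict(zip(people, f + r))

# Hypothetical inputs: mention counts, model scores, and a rank (1 = most powerful).
people = ["Vladimir Putin", "Angela Merkel", "Janet Yellen", "Warren Buffett"]
freq = {"Vladimir Putin": 9100, "Angela Merkel": 5400, "Janet Yellen": 2100, "Warren Buffett": 1800}
reg = {"Vladimir Putin": 0.61, "Angela Merkel": 0.55, "Janet Yellen": 0.52, "Warren Buffett": 0.63}
forbes_rank = {"Vladimir Putin": 1, "Angela Merkel": 2, "Janet Yellen": 3, "Warren Buffett": 4}

combined = combined_power_scores(people, freq, reg)
# Negate the rank so that both series increase with power before correlating.
rho, p = spearmanr([-forbes_rank[p] for p in people], [combined[p] for p in people])
print(rho, p)
```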
The unmasked BERT embeddings also achieve positively correlated scores, though these correlations are not statistically significant. The BERT-masked embeddings perform particularly poorly, as does the method for scoring power proposed in Field et al. (2019). While Field et al. (2019) may be capable of identifying powerful entities, we suspect it is not fine-grained enough to rank them. While frequency serves as a strong baseline for power, we would not expect frequency to be a good measure of sentiment or agency. None of our metrics for these traits are significantly correlated with the Forbes’ ranking. Also, we would not expect frequency to be a good measure in other contexts, such as how powerfully an entity is portrayed in a single document rather than across a large media corpus. Next, we further explore performance on an outof-domain task: specifically how powerfully entities are portrayed in a specific set of articles, which we do not expect to align with portrayals in the data used to train ELMo and BERT. For this task, we use the same evaluation metrics as Field et al. (2019); we compare our scores with hand-annotated power rankings over a set of newspaper articles related to a specific event in the #MeToo movement, namely allegations of sexual harassment against the comedian Aziz Ansari. 5http://bit.ly/2Mp2R70 6https://corpus.byu.edu/now/ Full annotation set (383 pairs) Regression ASP ELMo 44.9 43.6 BERT 41.8 49.3 BERT-masked 49.6 59.0 Frequency Baseline 58.0 Reduced annotation set (49 pairs) Regression ASP ELMo 36.7 42.8 BERT 42.9 49.0 BERT-masked 53.1 55.1 Frequency Baseline 57.1 Field et al. (2019) 71.4 Table 4: Accuracy for scoring how powerful entities are as compared with annotations over articles related to the #MeToo movement. Our metrics do not consistently outperform the baselines, suggesting ELMo and BERT embeddings fail to transfer across domains. Following Field et al. (2019), we interpret the hand-annotations, in which human annotators rank entities according to how powerful they seem, as a pairwise task (is entity A more powerful than entity B?) and compute accuracy over pairs of entities. We discard annotations where annotators strongly disagreed about the power of the entity (i.e. annotations differ by more than 2 ranks). Field et al. (2019) compare results with off-theshelf connotation frame lexicons, which restricts analysis to a limited set of pairs, since only entities used with verbs from the lexicon are included. In contrast, we simply use string matching to identify entities in the text, without requiring that the entities be linked to specific verbs, allowing for the identification of more entities. Table 4 shows results over the same set of pairs used for evaluation in Field et al. (2019) as well as an expanded set, when we do not restrict to entities used with lexicon verbs. Our metrics fail to consistently outperform even the frequency baseline for this task, likely because the ELMo and BERT embeddings are biased towards their training data. The #MeToo movement is widely known for subverting traditional power roles: allegations made by traditionally unpowerful women have brought down traditionally powerful men. 
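The pairwise evaluation described above reduces to counting ordered pairs on which the system agrees with the human ranking. A minimal sketch follows; the rank convention (1 = most powerful) and the tie handling are our assumptions, and the additional filtering of high-disagreement annotations is omitted.

```python
from itertools import combinations

def pairwise_accuracy(human_rank, system_score):
    """human_rank: {entity: rank}, lower rank = more powerful.
    system_score: {entity: score}, higher score = more powerful.
    Counts pairs (A, B) where the system orders A and B as the annotators do."""
    entities = [e for e in human_rank if e in system_score]
    correct, total = 0, 0
    for a, b in combinations(entities, 2):
        if human_rank[a] == human_rank[b]:
            continue  # skip ties in the human ranking
        human_says_a = human_rank[a] < human_rank[b]
        system_says_a = system_score[a] > system_score[b]
        correct += human_says_a == system_says_a
        total += 1
    return correct / total if total else 0.0

# Toy usage: the system inverts the order of B and C, so 2 of 3 pairs are correct.
print(pairwise_accuracy({"A": 1, "B": 2, "C": 3}, {"A": 0.9, "B": 0.2, "C": 0.4}))
```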

For example, Harvey Weinstein, an influential film producer, has traditionally been a powerful figure in society, but numerous allegations of sexual harass2555 Power Score weakly Rachel Dent Gordan Batman Joker powerfully Sentiment Score negative Joker Dent Gordan Rachel Batman positive Agency Score dull Dent Gordan Rachel Batman Joker scary Figure 1: Power, sentiment, and agency scores for characters in The Dark Night as learned through the regression model with ELMo embeddings. Scores generally align with character archetypes, i.e. the antagonist has the lowest sentiment score. ment have resulted in his effective removal from the industry. While articles about the #MeToo movement portray men like Weinstein as unpowerful, we can speculate that the corpora used to train ELMo and BERT portray them as powerful. Thus, in a corpus where traditional power roles have been inverted, the embeddings extracted from ELMo and BERT perform worse than random, as they are biased towards the power structures in the data they are trained on. Further evidence of this exists in the performance of the BERT-masked embeddings - whereas these embeddings generally capture power poorly as compared to the unmasked embeddings (Table 2), they outperform the unmasked embeddings on this task, and even outperform the frequency baseline in one setting. Nevertheless, they do not outperform Field et al. (2019), likely because they do not capture affect information as well as the unmasked embeddings (Table 2). 4.3 Qualitative Document-level Analysis Finally, we qualitatively analyze how well our method captures affect dimensions by analyzing single documents in detail. We conduct this analysis in a domain where we expect entities to fulfill traditional power roles and where entity portrayals are known. Following Bamman et al. (2013), we analyze the Wikipedia plot summary of the movie The Dark Knight,7 focusing on Batman (protagonist),8 the Joker (antagonist), Jim Gordan (law enforcement officer, ally to Batman), Har7http://bit.ly/2XmhRDR 8We consider Batman/Bruce Wayne to be the same entity. Power Score weakly Rachel Joker Dent Gordan Batmanpowerfully Sentiment Score negative Joker Gordan Batman Dent Rachel positive Agency Score dull Rachel Dent GordanBatman Joker scary Figure 2: Power, sentiment, and agency scores for characters in The Dark Night as learned through ASP with ELMo embeddings. These scores reflect the same patterns as the regression model with greater separation between characters. vey Dent (ally to Batman who turns evil) and Rachel Dawes (primary love interest). To facilitate extracting example sentences, we score each instance of these entities in the narrative separately and average across instances to obtain an entity score for the document.9 To maximize our data by capturing every mention of an entity, we perform co-reference resolution by hand. Additionally, based on our results from Table 3 as well as the use of Wikipedia data in training the ELMo model (Peters et al., 2018), we use ELMo embeddings for our analysis. Figures 1 and 2 show results. For reference, we show the entity scores as compared to one polar opposite pair identified by ASP. Both the regression model and ASP show similar patterns. Batman has high power, while Rachel has low power. Additionally, the Joker is associated with the most negative sentiment, but the highest agency. Throughout the plot summary, the movie progresses by the Joker taking an aggressive action and the other characters responding. 
We can see this dynamic reflected in the Joker’s profile score, as a high-powered, high-agency, low-sentiment character, who is the primary plotdriver. In general, ASP shows a greater separation between characters than the regression model. We hypothesize that this occurs because ASP isolates the dimensions of interest, while the regression approach captures other confounds, such as that hu9When we used this averaging metric in other evaluations, we found no significant change in results. Thus, in other scenarios, we compute scores over averaged embeddings, rather than averaging scores separately computed for each embedding to reduce computationally complexity. 2556 Power Score weakly Rachel Batman Joker powerfully Marion Jones Belloq Figure 3: Power scores for characters in Raiders of the Lost Ark and The Dark Night as learned through the regression model with ELMo embeddings. Female characters have lower power scores than male characters. Sentiment Score negative Jones Marion Belloq positive Figure 4: Sentiment scores for characters in Raiders of the Lost Ark as learned through the regression model with ELMo embeddings. The antagonist is scored surprisingly positively. mans tend to be high agency entities. Furthermore, because we score each instance separately, we can pinpoint particularly representative sentences. The sentence indicating the most positive sentiment for Batman is also the sentence that indicates the lowest sentiment for the Joker: “Both the civilians and the prisoners refuse to kill each other, while Batman apprehends the Joker after a brief fight.” An example sentence where the Joker is scored with particularly high power is: “After announcing that Gotham City will be subject to his rule by nightfall, the Joker rigs two evacuating ferries with explosives.” In contrast, a moment where Rachel is portrayed as particularly low-powered is: “Both buildings explode, killing Rachel and disfiguring half of Dent’s face.” One of the advantages of the persona model in Bamman et al. (2013) is the ability to cluster characters across stories, identifying roles like hero and villain more generally. We can similarly use our model to analyze characters across story lines. We show results using the regression model; the ASP results (omitted) reveal the same patterns. In Figure 3, we compare characters from the plot summary of Raiders of the Lost Ark to the characters of The Dark Night, specifically Indiana Jones (protagonist), Rene Belloq (antagonist) and Marion Ravenwood (love interest).10 We can see a clear separation between the female love interests and the male protagonists and antagonists, thus identifying similar roles in the same way as 10http://bit.ly/30ZMhhj a persona model. However, whereas the output of a persona model is distributions over personas and vocabulary, our system outputs scores along known dimensions of power, agency, and sentiment, which are easy to interpret and visualize. Furthermore, our approach is meaningful at the level of an individual document or sentence. The affect scores in Indiana Jones reveal some of the limitations of our approach. Figure 4 shows the sentiment scores for these characters. While Indiana Jones and Marion have similar sentiment scores, Belloq is portrayed surprisingly positively. In reading the plot summary, Belloq’s role in the narrative is often not obvious through immediate context. While the Joker “burns” and “rigs explosives”, Belloq “arrives” and “performs a ceremonial opening”. 
The reader understands Belloq's role in the story through context in the broader story line, rather than through the context immediately surrounding mentions of Belloq. The sentence-level embeddings produced by ELMo do not capture the broader role of characters in narratives. Finally, our model (as well as the persona model) does not specifically account for perspective. For example, character deaths are often scored as a negative portrayal. Death may be a negative event, and villains (e.g., Belloq) often die, allowing us to capture their role as negative characters. However, "good" characters also often die in stories, and in these cases the reader tends to view the character positively (i.e., with sympathy). Our approach does not explicitly model perspective: it does not separate an event that is negative from a character's point of view from the positive sentiment (e.g., sympathy) that it generates in the reader. The incorporation of connotation frames (Rashkin et al., 2016), in which annotations are along clearly defined perspectives, may offer a way to improve our approach.

5 Usage Example: Analysis of Gender Bias in Media

In this section, we use our proposed methods to analyze how men and women are portrayed in the media, focusing on domains of interest in prior NLP work (Wagner et al., 2015; Fu et al., 2016). We use the NOW corpus and regression with ELMo embeddings for analysis (ASP results are nearly identical). First, we return to the example from §4.2, the list of the most powerful people from Forbes Magazine.

[Figure 5: Power scores for people on the 2016 Forbes Magazine power list as learned through regression with ELMo embeddings, and through combined regression and frequency scores. Women are generally scored lower than similarly ranked men.]
In Figure 6, we show the sentiment and power (combined regression + frequency) scores for the top-ranked male and female tennis players in 2016. (We note that the portrayals of other people with the same first names in the training data may have biased the ELMo embeddings.) Prior work has shown bias in news coverage of male and female tennis players, specifically, that male players are typically asked questions more focused on the game than female players are (Fu et al., 2016). Our analysis focuses on a different data set and coverage type: we examine general articles rather than post-match interviews.

[Figure 6: Sentiment and power scores for the top-ranked male (left) and female (right) tennis players in 2016 through regression with ELMo embeddings (power scores combine regression scores with frequency counts). Women are generally portrayed with lower power and higher sentiment.]

As expected, popular players Serena Williams and Andy Murray have the highest sentiment scores and very high power scores. In contrast, Novak Djokovic, who has notoriously been less popular than his peers, has the lowest sentiment score but the second highest power score (after Williams). Additionally, female players are typically portrayed with more positive sentiment (female average score = 0.58; male average score = 0.54), whereas male players are portrayed with higher power (female average score = 0.52; male average score = 0.57). However, the difference in power disappears when we remove frequency from the metric and use only the regression scores, suggesting that the difference arises because male players are mentioned more frequently.

6 Related Work

The most similar prior work to ours uses contextualized embeddings to map connotation frames (verb annotations) into power, agency, and sentiment scores for entities (Field et al., 2019). In contrast, our method scores entities directly, allowing it to incorporate more information than just verb features and eliminating the need for dependency parsing. Furthermore, unlike the connotation frame annotations (Rashkin et al., 2016; Sap et al., 2017), the VAD lexicons used in this work were specifically motivated by the social psychology literature on this topic, which influenced the annotation scheme (Mohammad, 2018). Our analysis in §4.2 suggests that while Field et al. (2019) works better for out-of-domain data, our proposed methods are able to obtain finer-grained and more accurate scores for in-domain data. Prior to the proposed power, agency, and sentiment framework, initial approaches to person-centric analyses used graphical models to identify personas in narratives (Bamman et al., 2013; Card et al., 2016; Iyyer et al., 2016; Chaturvedi et al., 2017), where personas are distributions over nouns, adjectives and verbs. These models allow for identifying roles in stories, such as that Batman and Iron Man are both characters who "shoot", "aim", and "overpower". While this approach is useful for processing unstructured texts, personas are limited to distributions over a discrete vocabulary and rely only on noun, adjective, and verb modifiers.
In contrast, contextualized word embeddings have the power to capture all context in a sentence and provide more nuanced representations, especially considering non-contextualized embeddings have been shown to reflect biases in society (Garg et al., 2018). Furthermore, persona models can be difficult to interpret, whereas our analysis is grounded in concrete affect dimensions. Other approaches that broadly address how people are portrayed use domain-specific features to target particular hypotheses. Fast et al. (2016) analyze characters in fiction through crowd-sourced lexicons that target gender stereotypes. While useful for identifying bias, this method is limited to discrete modifiers and targeted lexicons do not necessarily generalize to other domains. Wagner et al. (2015) similarly use domain-specific knowledge to analyze coverage of men and women on Wikipedia, incorporating metadata like links between pages. Most affective NLP analyses of narratives focus on sentiment or specific stereotypes. Studies of power have largely been limited to a dialog setting (e.g. Danescu-Niculescu-Mizil et al. (2012), see Prabhakaran (2015) for an overview), and almost no work has examined agency, with the exception of connotation frames. Several recent works have evaluated the usefulness of pre-trained contextualized word embeddings in existing NLP tasks as well as through new benchmarks, designed to distill what type of information these models encode (Tenney et al., 2019; Goldberg, 2019; Liu et al., 2019). These investigations focus on syntactic tasks, with semantic evaluations primarily limited to semantic role labeling. To the best of our knowledge, this is the first work to target affective dimensions in pre-trained contextualized word embeddings. Our findings are consistent with prior work suggesting that contextualized embeddings capture biases from training data (Zhao et al., 2019; Kurita et al., 2019) and that these models perform best when trained on in-domain data (Alsentzer et al., 2019). 7 Conclusions and Future Work We propose a method for incorporating contextualized word embeddings into entity-centric analyses, which has direct applications to numerous social science tasks. Our results are easy to interpret and readily generalize to a variety of research questions. However, we further expose several limitations to this method, specifically that contextualized word embeddings are biased towards representations from their training data, which limits their usefulness in new domains. While we explore masking target words as a possible solution to this problem, we find that masking significantly decreases performance. We leave alternative solutions for future work, including training embeddings from scratch or fine-tuning on the target corpus (however, these ideas are only feasible with a large target corpus, and the need for fine-tuning reduces the usefulness of pre-trained embeddings). Despite this limitation, we find that these models are expressive enough to analyze entity portrayals in in-domain data, allowing us to examine different portrayals of men and women. Acknowledgments We gratefully thank anonymous reviewers, area chairs, Arnav Kumar, and Daniel Spokoyny. This material is based on work supported by the NSF GRFP under Grant No. DGE1745016 and by Grant No. IIS1812327 from the NSF. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the NSF. 2559 References Areej Alhothali and Jesse Hoey. 2015. 
Good news or bad news: using affect control theory to analyze readers’ reaction towards news articles. In Proc. of NAACL. Emily Alsentzer, John R Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In NAACL Clinical NLP Workshop. David Bamman, Brendan O’Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. In Proc. of ACL. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proc. of NeurIPS. Dallas Card, Justin Gross, Amber Boydstun, and Noah A Smith. 2016. Analyzing framing through the casts of characters in the news. In Proc. of EMNLP. Snigdha Chaturvedi, Mohit Iyyer, and Hal Daume III. 2017. Unsupervised learning of evolving relationships between literary characters. In Proc. of AAAI. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proc. of WWW. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. Ethan Fast, Tina Vachovsky, and Michael S Bernstein. 2016. Shirtless and dangerous: Quantifying linguistic signals of gender bias in an online fiction writing community. In Proc. of ICWSM. Anjalie Field, Gayatri Bhat, and Yulia Tsvetkov. 2019. Contextual affective analysis: A case study of people portrayals in online #metoo stories. In Proc. of ICWSM. Liye Fu, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Tie-breaker: using language models to quantify gender bias in sports journalism. IJCAI workshop on NLP meets Journalism. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. David R Heise. 2007. Expressive order: Confirming sentiments in social actions. Springer Science & Business Media. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proc. of NAACL. Kenneth Joseph, Wei Wei, and Kathleen M Carley. 2017. Girls rule, boys drool: Extracting semantic and affective stereotypes from twitter. In Proc. of CSCW. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proc. of Workshop on Gender Bias for NLP. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proc. of NAACL. Saif M. Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proc. of ACL. C.E. Osgood, G.J. Suci, and P.H. Tannenbaum. 1957. The Measurement of Meaning. Illini Books, IB47. University of Illinois Press. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Vinodkumar Prabhakaran. 2015. Social Power in Interactions: Computational Analysis and Detection of Power Relations. Ph.D. thesis, Columbia University. 
Alec Radford, Karthik Narasimhan, and Tim Salimans. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI. Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation frames: A data-driven investigation. In Proc. of ACL. James A Russell. 1980. A circumplex model of affect. Journal of personality and social psychology, 39(6):1161. James A Russell. 2003. Core affect and the psychological construction of emotion. Psychological review, 110(1):145. Maarten Sap, Marcella Cindy Prasetio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connotation frames of power and agency in modern films. In Proc. of EMNLP. 2560 Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proc. of ICLR. Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. It’s a man’s Wikipedia? Assessing gender inequality in an online encyclopedia. In Proc. of ICWSM. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proc. of NAACL. A Appendix 0 0.025 0.05 0.075 0.1 0.125 Power 0 0.0 0.1 0.1 0.2 Sentiment 0 0.025 0.05 0.075 0.1 Agency Figure 7: Percent of variance explained by the top 10 principle components for each affect dimension using ELMo embeddings. PCA was conducted on at least 100 embeddings per affect trait designed to have the greatest degree of variance along the dimension of the target affect trait B Appendix When using ELMo embeddings, we keep only the middle (second) ELMo layer, due to our preliminary investigations as well as prior work suggesting that this layer captures the most semantic information (Peters et al., 2018). When constructing embeddings for multi-word entities we keep the embedding for the first word. The BERT model uses WordPiece embeddings (Wu et al., 2016), which can result in subwordlevel embeddings rather than word-level embeddings. In the case that a word is tokenized into subwords, we keep only the embedding for the first token in the word. We use the BERT Base Uncased model, and we use mean pooling to combine the 12 embedding layers into a single embedding with 768 dimensions. We train hyper-parameters over the dev set, maximizing for Pearson correlation between the gold VAD annotations and the scores predicted by our models. We fix hyperparamters |L|, |H|, and N as (400, 300, 200) for power, (900, 200, 100) for sentiment, and (400, 300, 200) for agency. In the regression model, we use an RBF kernel and fix α = 0.6 and γ = 1. All embeddings are normalized to unit length.
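For concreteness, the pooling procedure described in Appendix B can be sketched as follows. This is our own illustration using the HuggingFace transformers library rather than the original implementation; the helper name, the example sentence, and the target-word index are hypothetical.

```python
import torch
from transformers import BertTokenizer, BertModel

# Our illustration (not the authors' code) of the appendix procedure:
# mean-pool the 12 BERT encoder layers and keep only the first WordPiece
# of the target word, then normalize to unit length.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def entity_embedding(words, target_idx):
    """Return a 768-d contextual vector for words[target_idx]."""
    pieces, first_piece_of_word = ["[CLS]"], []
    for w in words:
        first_piece_of_word.append(len(pieces))   # index of the word's first WordPiece
        pieces.extend(tokenizer.tokenize(w))
    pieces.append("[SEP]")
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(pieces)])
    with torch.no_grad():
        outputs = model(input_ids)
    # hidden_states = (embedding layer, encoder layer 1, ..., encoder layer 12)
    layers = torch.stack(outputs.hidden_states[1:], dim=0)   # (12, 1, seq, 768)
    pooled = layers.mean(dim=0)[0]                           # (seq, 768)
    vec = pooled[first_piece_of_word[target_idx]]
    return vec / vec.norm()                                  # unit length

print(entity_embedding("The senator defended her vote".split(), 1).shape)
```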
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2561–2571 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2561 Sentence-Level Evidence Embedding for Claim Verification with Hierarchical Attention Networks Jing Ma1, Wei Gao2, Shafiq Joty3,4, Kam-Fai Wong1,5 1The Chinese University of Hong Kong, Hong Kong SAR 2Victoria University of Wellington, New Zealand 3Nanyang Technological University, Singapore 4Salesforce Research Asia, Singapore 5MoE Key Laboratory of High Confidence Software Technologies, China 1{majing,kfwong}@se.cuhk.edu.hk [email protected], [email protected] Abstract Claim verification is generally a task of verifying the veracity of a given claim, which is critical to many downstream applications. It is cumbersome and inefficient for human fact-checkers to find consistent pieces of evidence, from which solid verdict could be inferred against the claim. In this paper, we propose a novel end-to-end hierarchical attention network focusing on learning to represent coherent evidence as well as their semantic relatedness with the claim. Our model consists of three main components: 1) A coherence-based attention layer embeds coherent evidence considering the claim and sentences from relevant articles; 2) An entailment-based attention layer attends on sentences that can semantically infer the claim on top of the first attention; and 3) An output layer predicts the verdict based on the embedded evidence. Experimental results on three public benchmark datasets show that our proposed model outperforms a set of state-of-the-art baselines. 1 Introduction The increasing popularity of social media has drastically changed how our daily news are produced, disseminated and consumed.1 Without systematic moderation, a large volume of information based on false or unverified claims (e.g., fake news, rumours, propagandas, etc.) can proliferate online. Such misinformation poses unprecedented challenges to information credibility, which traditionally relies on fact-checkers to manually assess whether specific claims are true or not. Despite the increased demand, the effectiveness and efficiency of human fact-checking is handicapped by the volume and fast pace the noteworthy 1The latest Pew Research statistics show that 68% American adults at least occasionally get news on social media. http://www.pewinternet.org/2018/03/ 01/social-media-use-in-2018/ claims being produced on daily basis. Therefore, it is an urgent need to automate the process and ease the human burden in assessing the veracity of claims (Thorne and Vlachos, 2018). Not surprisingly, various methods for automatic claim verification have been proposed using machine learning. Typically, given the claims, models are learned from auxiliary relevant sources such as news articles or social media responses for capturing words and linguistic units that might indicate viewpoint or language style towards the claim (Jin et al., 2016; Rashkin et al., 2017; Popat et al., 2017; Volkova et al., 2017; Dungs et al., 2018). However, the factuality of a claim is independent of people’s belief and subjective language use, and human perception is unconsciously prone to misinformation due to the common cognitive biases such as naive realism (Reed et al., 2013) and confirmation bias (Nickerson, 1998). 
A recent trend is that researchers are trying to establish more objective tasks and evidence-based verification solutions, which focus on the use of evidence obtained from more reliable sources, e.g., encyclopedia articles, verified news, etc., as an important distinguishing factor (Thorne and Vlachos, 2018). Ferreira and Vlachos (2016) use news headlines as evidence to predict whether it is for, against or observing a claim. In the Fake News Challenge2, the body text of an article is used as evidence to detect the stances relative to the claim made in the headline. Thorne et al. (2018a) formulate the Fact Extraction and VERification (FEVER) task which requires extracting evidence from Wikipedia and synthesizing information from multiple documents to verify the claim. Popat et al. (2018) propose DeClarE, an evidence-aware neural attention model to aggregate salient words from source news articles as the 2http://www.fakenewschallenge.org/ 2562 c: The test of a 5G cellular network is the cause of unexplained bird deaths occurring in a park in The Hague, Netherlands. Verdict: False s1: [Contradict]: Lots of tests going on with it in the Netherlands, but there haven’t been test done in The Haque during the time that the mysterious starling deaths occurred. s2: [Contradict]: One such test did occur in an area generally near Huijgenspark, but it took place on 28 June 2018. s3: [Entail]: It’s not clear whether tests with 5G have been carried out again, but so far everything points in the direction of 5G as the most probable cause. s4: [Neutral]: Between Friday, 19 Oct and Saturday, 3 Nov 2018, 337 dead starlings and 2 dead common wood pigeons were found. s5: [Entail]: The radiation created on the attempt of 5G cellular networks are not harmful only for birds but also for humans too. s6: [Neutral]: 5G network developers promise faster data rates in addition to reduce energy and financial cost. s7: [Neutral]: Parts of the park are blocked and dogs are no longer allowed to be let out, the dead birds are always cleaned up as quickly as possible. Table 1: Sentences topically coherent (s1–s4) and not coherent (s5–s7) with each other relative to the claim c, where their semantic entailment relations (i.e., entail, contradict, neural) with c are shown. main evidence to obtain claim-specific representation based on the attention score of each token. Inspired by the FEVER task (Thorne et al., 2018a) and DeClarE (Popat et al., 2018), we propose our approach to claim verification by using representation learning to embed sentence-level evidences based on coherence modeling and natural language inference (NLI). The example in Table 1 illustrates our general idea: given a claim “The test of a 5G cellular network is the cause of unexplained bird deaths occurring in a park in The Hague, Netherlands” and its relevant articles, we try to embed into the claim-specific representation those evidential sentences (e.g., s1–s4) that are not only topically coherent among themselves considering the claim, but could also semantically infer the claim based on textual entailment relations such as entail, contradict, and neutral. It is hypothesized that sentence-level evidence can convey more complete and deeper semantics, thus providing stronger NLI capacity between claim and evidence, which would result in better claimspecific representation for the more accurate factchecking decision. 
In this work, we propose an end-to-end hierarchical attention network for sentence-level evidence embedding that aims to attend on important sentences (i.e., evidence) by considering their topical coherence and semantic inference strength. Different from DeclarE (Popat et al., 2018), our model can determine the verdict of a claim more reasonably with evidential sentences embedded into the learned claim representation. Meanwhile, with the help of attention, crucial evidence can be highlighted and referred for better interpretability of the verdict. Our model is also advantageous over pipeline methods such as Neural Semantic Matching Network (NSMN) (Nie et al., 2019) which topped the FEVER shared task (Thorne et al., 2018b), because our model can be trained to address evidence representation learning directly rather than rank and select sentences semantically similar to the claim. Our contributions are summarized as follows: • We propose a novel claim verification framework based on hierarchical attention neural networks to learn sentence-level evidence embeddings to obtain claim-specific representation. • We use a co-attention mechanism to model sentence coherence and integrate the coherenceand entailment-based attentions into our proposed hierarchical attention framework for better evidence embedding. • We experimentally confirm that our method is much more effective than several state-of-theart claim verification models using three public benchmark datasets collected from snopes.com, politifact.com and Wikipedia. 2 Related Work The literature on fact-checking and credibility assessment has been reviewed by several comprehensive surveys (Shu et al., 2017; Zubiaga et al., 2018; Kumar and Shah, 2018; Sharma et al., 2019). We only briefly review prior works closely related to ours. Many studies on claim verification extracted veracity-indicative features that can reflect stances and writing styles from relevant texts such as news articles, microblog posts, etc. and used the traditional supervised models to learn the parameters (Castillo et al., 2011; Qazvinian et al., 2011; Rubin et al., 2016; Ferreira and Vlachos, 2016; Rashkin et al., 2017). Deep learning models such as recurrent neural networks (RNN) (Ma et al., 2016), convolutional neural networks (CNN) (Wang, 2017) and recursive neural 2563 networks (Ma et al., 2018) were also exploited to learn the feature representations. More recently, semantic matching methods were proposed to retrieve evidence from relatively trustworthy sources such as checked news and Wikipedia articles. Popat et al. (2018) attempted to debunk false claims by learning claim representations from relevant articles using an attention mechanism to focus on words that are closely related to the claim. Following NLI (Bowman et al., 2015), which is a task of classifying the relationship between a pair of sentences, composed by a premise and a hypothesis, as Entails, Contradicts or Neutral, Thorne et al. (2018a) formulated claim verification as a task that aims to classify claims into Supported, Refuted or Not Enough Info (NEI). They released a large dataset containing mutated claims based on relevant Wikipedia articles and developed a basic pipeline with document retrieval, sentence selection, and NLI modules. Similar pipelines were developed by most of the participating teams (Nie et al., 2019; Padia et al., 2018; Alhindi et al., 2018; Hanselowski et al., 2018) in FEVER shared task (Thorne et al., 2018b). 
Apart from the document retrieval function, our model is end-to-end and aims to learn sentence-level evidence with a hierarchical attention framework. Attention is in general used to attend on the most important part of texts, and has been successfully applied in machine translation (Luong et al.), question answering (Xiong et al., 2016) and parsing (Dozat and Manning, 2016), and is adopted in our model for attending on important sentences as evidence. Our work is also related to coherence modeling. Different from traditional coherence studies focusing on discourse coherence among sentences that are widely applied in text generation (Park and Kim, 2015; Kiddon et al., 2016) and summarization (Logeswaran et al., 2018), we try to capture evidential sentences topically coherent not only among themselves but also with respect to the target claim. 3 Problem Statement We define a claim verification dataset as {C}, where each instance C = (y, c, S) is a tuple representing a given claim c which is associated with a ground-truth label y and a set of n sentences S = {si}n i=1 from the relevant documents of the claim. We assume the relevant documents are retrieved from text collections containing variable number of sentences, and we disregard the order of sentences and which documents they are from. Our task is to classify an instance into a class defined by the specific dataset, such as veracity class labels, e.g., True/False, or NLI-style class labels, e.g., Supported/Refuted/NEI. Our approach exploits and integrates two core semantic relations: 1) coherence of the sentences given the claim; 2) entailment relation between the claim and each sentence, which are described more specifically below. Coherence Evaluation: According to the coherence theory of truth, the truth of any (true) proposition consists in its coherence with some specified set of propositions (Young, 2018). In order to focus on the useful evidence in a set of relevant sentences S, we propose a coherence-based attention component by cross-checking if any sentence si ∈S coheres well with the claim and with other sentences in S in terms of topical consistency. Textual Entailment: Entailment is used to measure whether a piece of evidence semantically infers a given claim. We propose an entailmentbased attention component that can be pre-trained to capture entailment relations (Dagan et al., 2010; Bowman et al., 2015) based on sentence pairs labeled with NLI-specific classes: entails, contradicts and neutral. This pre-trained component together with the entire claim verification framework then will be trained end-to-end to attend on the salient sentences for inferring the claim. 4 End-to-End Claim Verification Model In this section, we introduce our end-to-end hierarchical attention network for claim verification, which consist of two attention layers, i.e., coherence-based attention and entailment-based attention, for learning evidence embeddings. Figure 1 gives an overview of our framework, which will be depicted in detail in the subsections. 4.1 Sentence Representation Given a word sequence T = (w1 . . . wt . . . w|T|) which could be either a claim or a sentence, each wt ∈Rd is d-dimensional vector which can be initialized with pre-trained word embeddings. We map each wt into a fixed-sized hidden vector using standard GRU (Cho et al., 2014). 
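A minimal sketch of this word-to-sentence encoding is given below (written in PyTorch; the published system was implemented in Theano). The class name and the toy embedding matrix are our own; the two encoders defined next, one for the claim and one for the sentences, correspond to two instances of this module with separate parameters.

```python
import torch
import torch.nn as nn

# Our sketch (in PyTorch; the authors implemented their system in Theano):
# words are mapped to fixed pre-trained d=300 embeddings and the final GRU
# hidden state (l=100) serves as the sentence-level representation.
class GRUSentenceEncoder(nn.Module):
    def __init__(self, pretrained_embeddings, hidden_size=100):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=True)
        self.gru = nn.GRU(pretrained_embeddings.size(1), hidden_size, batch_first=True)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        _, h_last = self.gru(self.embed(token_ids))
        return h_last.squeeze(0)               # (batch, hidden_size)

# Two instances with separate parameters: one for claims, one for sentences.
glove = torch.randn(5000, 300)                 # placeholder for GloVe vectors
claim_encoder, sentence_encoder = GRUSentenceEncoder(glove), GRUSentenceEncoder(glove)
```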
We then obtain the sentence-level representation for a claim c and each sentence si ∈S using two GRU-based RNN 2564 Figure 1: Our end-to-end hierarchical attention networks for claim verification. encoders (one for c and the other for si): hc = h|c| = GRU(w|c|, h|c|−1, θc) hsi = h|si| = GRU(w|si|, h|si|−1, θS) (1) where |.| denotes the number of words, w|c| is the last word of c, w|si| is the last word of si, θc contains the claim encoder parameters, θS contains the sentence encoder parameters, and hc, hsi ∈ R1×l are l-dimensional vectors. 4.2 Coherence-based Evidence Attention Our assumption is that sentences used as evidence should be topically coherent given a claim. For example, for the claim in Table 1, which is about the connection between 5G test and birds’ death in a park in Hague, the sentences s1-s4 are topically coherent by specifically addressing the event’s detail while s5-s7 are marginal as s6 and s7 diverge from the focus and s5 is a too general statement even though it might imply a possibility. Our model cross-checks all the sentences to capture the coherence among them using an attention mechanism. We consider the relation from two perspectives: 1) global coherence measures the consistency of each sentence regarding the entire set as a whole; and 2) local coherence measures the consistency of each sentence considering its relation with another sentence. For each si, we use a biaffine attention (Dozat and Manning, 2016), which naturally fits our problem, to get the attention weights: ˜ai = (HS · Wc) · h⊤ si + HS · u⊤ ˜αi = softmax(˜ai) (2) where HS = [hs1; . . . ; hsn] ∈Rn×l is the matrix representing all sentences, and Wc ∈Rl×l and u ∈R1×l contain the weights of the biaffine transformation. The term HS · u⊤∈Rn×1 denotes the global coherence where each element is a prior probability of a sentence sj being coherent with any sentences in S; the term (HS · Wc) · h⊤ si ∈ Rn×1 is the local coherence where each element hsj · Wc · h⊤ si represents the relative likelihood of sj being coherent with si. Therefore, ˜αi ∈Rn×1 is a n-dimensional weight vector for si where each element ˜αij for j ∈[1, . . . , n] denotes the coherence attention weight between si and sj. Extension of Coherence Attention The coherence attention in Eq. 2 ignores the claim information. To prevent off-topic coherence which deviates from claim’s focus, we propose to assess each sentence’s coherence by jointly considering the claim and all sentences, which shares a similar intuition with the co-attention method in questionanswering (Lu et al., 2016; Xiong et al., 2016). Unlike the question-answer co-attention focusing on mutual selection of salient words in question and documents, we focus on sentence-level attention, for which we have multiple sentences but only one claim. So, we only need a claim-guided sentence attention. We use a gating unit to endow the model with the capacity of deciding how much information it should accept from the claim. The new attention weight of si is computed by: ¯hsi = gc→si ⊙hsi + (1 −gc→si) ⊙hc ¯ai = ( ¯HS · Wc) · ¯h⊤ si + ¯HS · u⊤ ¯αi = softmax(¯ai) (3) where gc→si = σ(Wg · hsi + Ug · hc) is the gate function with trainable parameters Wg and Ug, ¯HS = [¯hs1; . . . ; ¯hsn] denotes the stacked output 2565 of the gating unit, and other settings are same as the biaffine coherence attention (see Eq. 2). 
Based on the attention weights, each sentence can be represented as the weighted sum of all sentences, capturing its overall coherence: h′ si = X j αij · hsj (4) where αij is the attention weight between si and sj obtained from Eq. 2 (˜αi) or Eq. 3 (¯αi). Finally, we concatenate the coherence-based sentence embedding h′ si with the original embedding hsi to obtain a richer sentence representation: ˜hsi = tanh(Wco · [hsi, h′ si] + bco) (5) where Wco and bco are parameters for transforming the concatenation into a l-dimensional vector. 4.3 Entailment-based Evidence Attention We further enhance the sentence representation by capturing the entailment relations between the sentences and the claim based on the NLI method (Bowman et al., 2015) for strengthening the semantic inference capacity of our model. Given c and si, we represent each such pair by integrating three matching functions between hc and ˜hsi: 1) concatenation [hc, ˜hsi]; 2) elementwise product hc ⊙˜hsi; and 3) absolute elementwise difference |hc −˜hsi|. The similar matching scheme was commonly used to train NLI models (Conneau et al., 2017; Mou et al., 2016; Liu et al., 2016; Chen et al., 2016). We then perform a transformation to obtain the joint representation hc si as follow: hc si = tanh  We · h hc, ˜hsi, hc ⊙˜hsi, |hc −˜hsi| i (6) where We are trainable weights for transforming the long concatenation into an l-dimensional vector. We omit the bias to avoid notational clutter. To capture entailment-based evidence, we again apply attention over the original sentences guided by the joint representation hc si which is obtained on top of the coherence attention. This yields: bi = tanh(Ve · hc si + be) βi = exp(bi) P i exp(bi) hc S = X i βi · hsi (7) where Ve and be are parameters turning hc si to an entailment score bi, βi is the entailment-based attention weight of si which is used to produce the final representation hc S of an entire instance. Note that the hierarchy of our attention structure is conveyed by the query part hc si, and we apply the weight βi on the original representation hsi rather than h′ si (Eq. 4) or ˜hsi (Eq. 5), which is empirically better based on our trials since the latter two may contain more redundant information due to the sum over an entire set when computing h′ si. 4.4 The Overall Model The attention vector hc S is the high-level representation of the claim with the embedded evidence based on the hierarchical attention method. We use a fully connected output layer to output the probability distribution over the veracity classes: ˆy = softmax(Vo · hc S + bo) (8) where Vo and bo are the weights and bias in output layer. Note that Eq. 8 assumes that using hc S alone can determine the veracity as true or false without direct reference to the claim again. This may be suitable for news data as the salient news sentences often straightforwardly comment on the claim’s veracity. However, some claim verification tasks such as FEVER (Thorne et al., 2018a) are particularly defined to classify if the factual evidence from the source like Wikipedia, which rarely remark on the veracity of the mutated claim, can infer the claim as being supported, refuted or NEI. In such case, we replace hc S in Eq. 8 with the richer representation ˆhc S = [hc, hc S, hc ⊙hc S, |hc −hc S|] to facilitate the inference from the evidence to the claim in accordance with such NLI style of the task definition. 
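To make the shapes in Eqs. 2-8 concrete, a condensed sketch of the two attention layers and the output layer is given below. This is our own PyTorch rendering of the equations, not the authors' Theano code; variable names follow the paper, the gate is written as a pair of linear maps, and batching over claims is omitted for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Our PyTorch rendering of Eqs. 2-8 (the published system used Theano).
# h_c is the claim vector (l,), H_S stacks the n sentence vectors (n, l).
class HierarchicalAttentionSketch(nn.Module):
    def __init__(self, l=100, num_classes=2):
        super().__init__()
        self.W_c = nn.Parameter(0.01 * torch.randn(l, l))   # biaffine weights (Eqs. 2-3)
        self.u = nn.Parameter(0.01 * torch.randn(l))         # global coherence term
        self.W_g = nn.Linear(l, l, bias=False)                # gate, sentence side
        self.U_g = nn.Linear(l, l, bias=False)                # gate, claim side
        self.coh_proj = nn.Linear(2 * l, l)                   # W_co, b_co (Eq. 5)
        self.ent_proj = nn.Linear(4 * l, l)                   # W_e (Eq. 6)
        self.ent_score = nn.Linear(l, 1)                      # V_e, b_e (Eq. 7)
        self.out = nn.Linear(l, num_classes)                  # V_o, b_o (Eq. 8)

    def forward(self, h_c, H_S):
        # Eq. 3: gate claim information into each sentence vector.
        g = torch.sigmoid(self.W_g(H_S) + self.U_g(h_c))      # (n, l)
        H_bar = g * H_S + (1.0 - g) * h_c                      # (n, l)
        # Coherence attention: local biaffine term plus global term, softmax over j.
        local = (H_bar @ self.W_c @ H_bar.t()).t()             # local[i, j] = h_j W_c h_i^T
        A = local + (H_bar @ self.u).unsqueeze(0)              # add global score of each s_j
        alpha = F.softmax(A, dim=1)
        # Eqs. 4-5: coherence-weighted sum, concatenated with the original vector.
        H_prime = alpha @ H_S
        H_tilde = torch.tanh(self.coh_proj(torch.cat([H_S, H_prime], dim=-1)))
        # Eq. 6: claim-sentence matching features.
        c = h_c.expand_as(H_tilde)
        joint = torch.tanh(self.ent_proj(torch.cat(
            [c, H_tilde, c * H_tilde, (c - H_tilde).abs()], dim=-1)))
        # Eq. 7: entailment-based attention, applied to the *original* sentence vectors.
        beta = F.softmax(torch.tanh(self.ent_score(joint)).squeeze(-1), dim=0)
        h_cS = beta @ H_S
        # Eq. 8: class distribution from the embedded evidence.
        return F.softmax(self.out(h_cS), dim=-1)
```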
Interestingly, such treatment does not work for veracity classification of news claim (see Section 5.2), which may be because the veracity features of news claim have been already embedded into hc S and the richer representation ˆhc S involving the claim could introduce unnecessary noise to a non-NLI type of task unlike FEVER. To fine-tune our model, we also pre-train the coherence- and entailment-related parameters for avoiding the sole reliance on the potentially limited supervision from the task-specific labels. Pre-training Coherence Model Without ground truth for learning the coherence model, we use a pair-wise training strategy to optimize a large margin objective. For each claim 2566 c, we randomly choose another “negative” claim c′. Then we construct a tuple (s, X+, X−), where X+ = (c, S) and X−= (c′, S′) are tuples consisting of different claims and their relevant article sentences, and s ∈S is a sentence selected randomly. Generally, (s, X+) should exhibit higher topical coherence than (s, X−) since the former reports the same claim c. We seek for parameters that assign a higher score to (s, X+) than (s, X−) by minimizing the following margin-based ranking loss: Lc = max  0, 1 + r(s, X−) −r(s, X+) (9) and r(, ) is the ranking function turning the coherence-based sentence embedding to a ranking score: r(s, X) = tanh W ′ c · cohAtt(s, X) + b′ c  (10) where cohAtt(, ) is a shorthand of Eq. 4, and W ′ c and b′ c are the weights and bias of an added ranking output layer which is not a part of our end-to-end model. The pre-trained model is used to initialize all the parameters needed for computing Eq. 4. Pre-training Entailment Model We use the Standford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) to pretrain the parameters of entailment-based attention model. Specifically, we train a model for Recognizing Textual Entailment (RTE) as follow: ¯y = softmax(V ′ e · hRTE + b′ e) (11) where ¯y is the entailment class label, i.e., entails, contradicts, or neutral, hRTE has the same form as Eq. 6 while the input claim-sentence pair is replaced by a pair of premise and hypothesis in the SNLI corpus (each element is encoded by a GRU sentence encoder), and V ′ e and b′ e are the weights and bias of the RTE output layer which is not part of our end-to-end model. The pre-trained model is used to initialize the parameters We in Eq. 6. For pre-training, we minimize the square loss between the distributions of the predicted and the ground-truth entailment classes. Overall Training After pre-training, all the model parameters are trained end-to-end by minimizing the squared error between the class probability distribution of the prediction and that of the ground truth over the claims. Parameters are updated through backpropagation (Collobert et al., 2011) with AdaGrad (Duchi et al., 2011) for speeding up convergence. The training process ends when the model converges or the maximum epoch number is met. We represent input words using pre-trained GloVe Wikipedia 6B word embeddings (Pennington et al., 2014). We set d to 300 for word vectors and l to 100 for hidden units, and no parameter depends on n which varies with different claims. 5 Experiments and Results 5.1 Datasets and Evaluation Metrics We use three public fact-checking datasets for evaluation: 1) Snopes and 2) PolitiFact, released by Popat et al. (2018), containing 4,341 and 3,568 news claims, respectively, along with relevant articles collected from various web sources; 3) FEVER, released by Thorne et al. 
(2018a), which consists of 185,445 claims accompanied by human-annotated relevant Wikipedia articles and evidence-bearing sentences, and many claims in FEVER are human altered by mutating the original claims from Wikipedia. Each Snopes claim was labeled as true or false, while each PolitiFact claim was originally assigned one of six veracity labels: true, mostly true, half true, mostly false, false, and pants on fire. Unlike Popat et al. (2018) converting all the classes into true or false, we merge mostly true, half true and mostly false into mixed, and treat false and pants on fire as false. Thus, we have a more practical classification on PolitiFact, i.e., true, false and mixed. We use micro-/macro-averaged F1, classspecific precision, recall and F-measure as evaluation metrics. We hold out 10% of the claims for tuning the hyper parameters, and conduct 5-fold cross-validation on the rest of the claims. On FEVER dataset, each claim, which is classified as Supported, Refuted or NEI, can be verified with its ground-truth label and a set of humanannotated evidential sentences extracted from its relevant Wikipedia pages. This task is similar as predicting the entailment relation by aggregating the sentences to infer the NLI-style label of the target claim, instead of directly predicting the claim’s veracity as true or false. FEVER shared task used label accuracy, F1 score of evidential sentence selection, and FEVER score as evaluation metrics (Thorne et al., 2018b). 2567 Method Snopes PolitiFact True False True False Mixed micF1 macF1 Prec. Rec. F1 Prec. Rec. F1 micF1 macF1 F1 F1 F1 CNN 0.721 0.636 0.477 0.440 0.460 0.802 0.822 0.812 0.453 0.402 0.368 0.566 0.270 LSTM 0.689 0.642 0.441 0.512 0.517 0.834 0.716 0.771 0.463 0.413 0.452 0.561 0.228 SVM 0.704 0.649 0.459 0.584 0.511 0.832 0.747 0.786 0.450 0.421 0.440 0.547 0.277 DeClarE 0.762 0.695 0.559 0.556 0.553 0.839 0.837 0.837 0.475 0.443 0.447 0.576 0.307 HAN-na 0.750 0.674 0.535 0.500 0.517 0.821 0.841 0.831 0.470 0.431 0.456 0.594 0.242 HAN-ba 0.771 0.738 0.556 0.765 0.644 0.899 0.774 0.832 0.520 0.471 0.475 0.629 0.308 HAN 0.807 0.759 0.637 0.665 0.651 0.874 0.860 0.867 0.523 0.487 0.495 0.627 0.340 HAN-nli 0.747 0.670 0.534 0.491 0.512 0.817 0.841 0.830 0.485 0.432 0.467 0.599 0.230 Table 2: Results of comparison among different models on Snopes (left) and PolitiFact (right) datasets Method Snopes PolitiFact micF1 macF1 micF1 macF1 HAN-na 0.750 0.674 0.470 0.431 + ba 0.776 0.727 0.495 0.455 + ca 0.788 0.741 0.516 0.473 + ea 0.779 0.728 0.508 0.463 + ba + ea 0.771 0.738 0.520 0.471 + ca + ea 0.807 0.759 0.523 0.487 Table 3: Results of ablation test across different attentions on Snopes (left) and PoliFact (right) datasets. 5.2 Experiments on Veracity-based Datasets We compare our model and several state-of-the-art baseline methods described below. 1) SVM: A linear SVM model for fake news detection using a set of linguistic features (e.g., bag-of-words, ngrams, etc.) handcrafted from relevant sentences (Thorne and Vlachos, 2018); 2) CNN and LSTM: The CNN-based detection model (Wang, 2017) and LSTM-based RNN model for representation learning from word sequences (Rashkin et al., 2017), respectively, both using only claim content without considering external resources; 3) DeClarE: The word-level neural attention model for Debunking Claims with Interpretable Evidence (Popat et al., 2018) capturing world-level evidence from relevant articles; 4) HAN: Our full model based on Hierarchical Attention Networks, where coherence component uses Eq. 
3; 5) HAN-ba: A variant of HAN with biaffine attention in Eq. 2; 6) HANna: Our reduced model with no attention but only using original sentence representations; 7) HANnli: A variant of HAN by replacing hc S in Eq. 8 with ˆhc S for the output layer (see Section 4.4). We implement our models and DeClarE with Theano3, and use the original codes of other baselines. As DeClarE is not yet open-source, we consult with its developers for our implementation. 3http://deeplearning.net/software/ theano/ Results of Comparison As shown in Table 2, CNN and LSTM using barely content of claims without considering external information are comparable with SVM which uses handcrafted features based on relevant article sentences. Among all the baselines, DeClarE performs the best because it not only learns to capture complex features effectively via the neural model, but also strengthens the learned features by attending on the salient words that are important for predicting the correct label. Our model can capture more accurate sentencelevel evidence which convey the semantics more completely and deeply. The superiority is clear: HAN-na which considers sentence as evidence without using attention is already better than the baselines except DeClarE, implying the importance of sentence-level information. HAN-ba and HAN using attentions to embed sentence-level evidence consistently outperform DeClarE in large margin that is based on word-level attention. HAN consistently outperforms HAN-ba on both datasets. This suggests that the co-attention considering claim for capturing sentence coherence is more effective to represent more accurate evidence. HAN-nli, however, fails to work and is even worse than DeClarE, which confirms our conjecture that veracity classification on news data differs from a NLI type of task like on FEVER (see Section 5.3) since news reports often openly remark the claim’s veracity and involving the claim in the output layer may interfere the decision. Ablation Study To evaluate the impact of each component, we perform ablation tests based on the no-attention model HAN-na plus some component(s) which can be one or combination of the following attentions: 1) ba and 2) ca correspond to the coherencebased biaffine attention (Eq. 2) and co-attention 2568 (a) Snopes dataset (b) Politifact dataset Figure 2: Results of HAN and HAN- (not pre-trained) under different sizes of training data. (Eq. 3), respectively; 3) ea: entailment-based attention (Eq. 7). As shown in Table 3, HAN-na plus each component alone improves the model, indicating their effectiveness for embedding sentence-level evidence. Furthermore, +ca consistently outperforms +ba, reaffirming the advantage of co-attention; +ea makes similar improvements over HAN-na as +ba and +ca did, suggesting that both types of attention are comparably helpful. Combining them hierarchically makes further improvements especially in the case of +ca+ea, implying that the two attention mechanisms are complementary. We also examine the impact of pre-training on HAN in comparison with its performance without pre-training, namely HAN-. In Figure 2, we observe that the pre-training does not have much impact when we use the entire training set, but it clearly improves the model when only using certain proportions of the training data. This indicates that the fine-tuned coherence and entailment models are generally helpful for claim verification, especially when the sampled set is not sufficiently large for fully training the model. 
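As a concrete illustration of the coherence pre-training whose effect is measured above, the margin-based ranking objective of Eq. 9 can be written as follows. This is a sketch rather than the authors' code; coh_att and rank_head are assumed stand-ins for the coherence attention of Eq. 4 and the auxiliary ranking layer of Eq. 10.

```python
import torch
import torch.nn.functional as F

# Sketch of the pair-wise coherence pre-training objective (Eq. 9): a sentence s
# should score higher with its own claim/sentence set X_pos than with the set of
# a randomly drawn negative claim X_neg.
def coherence_pretrain_loss(coh_att, rank_head, s, X_pos, X_neg):
    r_pos = torch.tanh(rank_head(coh_att(s, X_pos)))
    r_neg = torch.tanh(rank_head(coh_att(s, X_neg)))
    return F.relu(1.0 + r_neg - r_pos).mean()   # max(0, 1 + r(s, X-) - r(s, X+))
```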
Discussion Regarding the gap between the published performance of DeClarE (Plain+Attn) (Popat et al., 2018) which is 0.79 on the Snopes dataset and that of our implementation of it which is 0.759, we conjecture the reason may be that DeClarE utilized an undisclosed strategy for balancing the training datasets that we could not easily replicate, while we trained all the systems in Table 2 on the original unbalanced dataset. We leave this for further investigation in future upon the availability of DeClarE source codes. On the PolitiFact dataset, since we adopt a three-way classification, it is thus not directly comparable with the original DeClarE performance which is based on two classes. Claim: Comedian Bill Murray is running for president Verdict: False 1 It turns out it’s not true and just the subject of a hoax article by a website parodying ABC news. 2 Bill Murry is not running for president, nor has he announced that fact from his hometown. 3 Unknown Internet prankster created fake website for NBC, ABC and Fox News running the headline “Bill Murray is running for president”. ... 8 Murray made the announcement from his home in and he felt the 2016 presidential election seemed like the right time to go. 9 Paul Horner, a spokesman for the campaign, told reporters that he believes in Bill Murry for President. Table 4: Examples of attended sentences ranked by the attention weight βi that can explain the verdict. Method Acc. Prec. Rec. F1 FEVER Fever-base 0.521 − − − 0.326 NSMN 0.697 0.286 0.870 0.431 0.665 HAN-nli 0.642 0.340 0.484 0.400 0.464 HAN-nli* 0.720 0.447 0.536 0.488 0.571 HAN* 0.475 0.356 0.471 0.406 0.365 Table 5: Results of different claim verification models on FEVER dataset (Dev set). The columns correspond to the predicted label accuracy, the evidence precision, recall, F1 score, and the FEVER score. Case Study Table 4 illustrates some top sentences embedded with a claim from Snopes dataset which is correctly detected as fake. We can see that 1) the top sentences have high topical overlap with both the claim and each other; 2) the highly ranked sentences play a major role in deciding the verdict, as they remark on the claim’s veracity directly; 3) the lower sentences seem less important since they either repeat the claim or are very subjective. Providing such readable pieces of evidence to human fact-checker for verifying the claim can be helpful. 5.3 Experiments on FEVER Dataset We compare the following systems on the public Dev set4 of FEVER dataset: 1) Fever-base: The FEVER baseline (Thorne et al., 2018a) that is a pipeline for claim verification including 3 stages: document retrieval, sentence selection and textual entailment. 2) NSMN: The pipeline-based system named as UNC-NLP topping the FEVER shared task (Thorne et al., 2018b), which was later reported as using Neural Semantic Matching Networks (Nie et al., 2019). 3) HAN-nli: Our full 4The test set is not publicly available at the time of this work being done. 2569 model trained using the FEVER task dataset. Note that similar to DeClarE our model assumes that the set of articles about each claim have been retrieved, while the FEVER task requires users search relevant Wikipages in the first place. Using FEVER, our method thus is not truly end-to-end in this setting. We utilize the document retrieval module of NSMN (Nie et al., 2019) to obtain the relevant Wikipages. 
4) HAN-nli*: For more fair comparison with NSMN which utilized the ground-truth sentences in the training set to train their sentence selector, we fine-tune the HAN-nli, namely HAN-nli*, by optimizing the square error loss between the entailment attention score bi (see Eq. 7) and the -1/+1 value indicating whether si is selected as a piece of evidence in the ground truth. 5) HAN*: The original HAN using Eq. 8 in the output layer and fine-tuned like HAN-nli*. Table 5 shows that HAN-nli* is much better than the two baselines in terms of label accuracy and evidence F1 score. There are two reasons: 1) apart from the retrieval module, our model optimizes all the parameters end-to-end, while the two pipeline systems may result in error propagation; and 2) our evidence embedding method considers more complex facets such as topical coherence and semantic entailment, while NSMN just focuses on similarity matching between the claim and each sentence. HAN-nli seem already a decent model given its much better performance than Fever-base. This confirms the advantage of our evidence embedding method on the FEVER task. NSMN achieves higher FEVER score and evidence recall than our method. However, the reason is straightforward: FEVER score favors recalling the annotated evidential sentences while one of the limitations of FEVER dataset is that the ground-truth sentences provided by human annotators were often incomplete (Thorne et al., 2018a,b). Our approach is not limited by selecting top-k sentences and may embed into evidence as many diverse sentences as the model requires. Compared to NSMN which aims to recall the top evidence sentences in FEVER’s ground truth, our model achieves much higher Accuracy, Evidence Precision and F1. HAN* is ineffective, confirming that in FEVER task the claim content is needed in the output layer for the NLI to take effect since the evidence from Wikipedia typically does not contain direct remarks on the veracity of a claim. Discussion The pipeline-based system NSMN demonstrates superior evidence retrieval performance in terms of FEVER score. We emphasize that the essential objective of our model is not for evidence retrieval and ranking. Instead of ranking sentences into the top-k positions, we pay more attention on claim verification accuracy by embedding and aggregating the useful sentences as evidence like we have explained above. However, such discrepancy inspires us to investigate in the future an end-to-end approach to jointly model evidence retrieval and claim verification in a unified framework based on our sentence-level attention mechanism. Finally, thanks to one of our reviewers, we learn about another two-stage model named TwoWingOS (Yin and Roth, 2018), which achieves a comparable FEVER score but a little bit higher accuracy than ours on FEVER task. The TwoWingOS applies a two-wing optimization approach to jointly optimizing sentence selection and veracity classification. The reasons regarding their higher performance might lie in that: 1) their input word embeddings are fine-tuned based on the context of the evidence and claim while ours are fixed during training; and 2) the document retrieval module of the TwoWingOS has demonstrated higher effectiveness than that of the NSMN (see rate (recall) and acc ceiling (OFEVER) in Tables 2 in (Yin and Roth, 2018; Nie et al., 2019) for details). 
6 Conclusions and Future Work We propose a novel neural end-to-end framework for claim verification by learning to embed sentence-level evidence with a hierarchical attention mechanism. Our model strengthens the evidence representations by attending on the sentences that are not only topically coherent but can also semantically infer the target claim. The results on three public benchmark datasets confirm the advantages of our method. For the future work, beyond what we have mentioned, we plan to examine our model on different information sources. We will also try to incorporate relevant metadata into it, e.g., author profile, website credibility, etc. Acknowledgment This work was partly supported by Hong Kong RGC GRF (14232816, 14209416, 14204118), NSFC (61877020) and SCSE-SUG grant M4082038 at NTU. 2570 References Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving factchecking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 85–90. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th international conference on World wide web, pages 675–684. ACM. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Enhancing and combining sequential and tree LSTM for natural language inference. CoRR, abs/1609.06038. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches. Journal of Natural Language Engineering, 4. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. Sebastian Dungs, Ahmet Aker, Norbert Fuhr, and Kalina Bontcheva. 2018. Can rumour stance alone predict veracity? In Proceedings of the 27th International Conference on Computational Linguistics, pages 3360–3370. William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 1163–1168. Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. Multi-sentence textual entailment for claim verification. 
In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103–108. Zhiwei Jin, Juan Cao, Yongdong Zhang, and Jiebo Luo. 2016. News verification by exploiting conflicting social viewpoints in microblogs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2972–2978. AAAI Press. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329–339. Srijan Kumar and Neil Shah. 2018. False information on web and social media: A survey. arXiv preprint arXiv:1804.08559. Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090. Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 3818–3824. AAAI Press. Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Rumor detection on twitter with tree-structured recursive neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1980–1989. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In The 54th Annual Meeting of the Association for Computational Linguistics, page 130. 2571 Raymond S Nickerson. 1998. Confirmation bias: A ubiquitous phenomenon in many guises. Review of general psychology, 2(2):175–220. Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. Ankur Padia, Francis Ferraro, and Tim Finin. 2018. Team umbc-fever: Claim verification using semantic lexical resources. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 161–165. Cesc C Park and Gunhee Kim. 2015. Expressing an image stream with a sequence of natural sentences. In Advances in neural information processing systems, pages 73–81. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Kashyap Popat, Subhabrata Mukherjee, Jannik Str¨otgen, and Gerhard Weikum. 2017. Where the truth lies: Explaining the credibility of emerging claims on the web and social media. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1003–1012. International World Wide Web Conferences Steering Committee. Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, and Gerhard Weikum. 2018. Declare: Debunking fake news and false claims using evidence-aware deep learning. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 22–32. Vahed Qazvinian, Emily Rosengren, Dragomir R Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1589–1599. Association for Computational Linguistics. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937. Edward S Reed, Elliot Turiel, and Terrance Brown. 2013. Naive realism in everyday life: Implications for social conflict and misunderstanding. In Values and Knowledge, pages 113–146. Psychology Press. Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake news or truth? using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7–17. Karishma Sharma, Feng Qian, He Jiang, Natali Ruchansky, Ming Zhang, and Yan Liu. 2019. Combating fake news: A survey on identification and mitigation techniques. arXiv preprint arXiv:1901.06437. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1):22–36. James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346–3359. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. Fever: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 809–819. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and verification (fever) shared task. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 1–9. Association for Computational Linguistics. Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 647–653. William Yang Wang. 2017. ” liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 422–426. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. CoRR, abs/1611.01604. Wenpeng Yin and Dan Roth. 2018. Twowingos: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 105–114. James O. Young. 2018. The coherence theory of truth. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018. 
Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2):32.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2572–2582 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2572 Predicting Human Activities from User-Generated Content Steven R. Wilson and Rada Mihalcea University of Michigan {steverw,mihalcea}@umich.edu Abstract The activities we do are linked to our interests, personality, political preferences, and decisions we make about the future. In this paper, we explore the task of predicting human activities from user-generated content. We collect a dataset containing instances of social media users writing about a range of everyday activities. We then use a state-of-the-art sentence embedding framework tailored to recognize the semantics of human activities and perform an automatic clustering of these activities. We train a neural network model to make predictions about which clusters contain activities that were performed by a given user based on the text of their previous posts and selfdescription. Additionally, we explore the degree to which incorporating inferred user traits into our model helps with this prediction task. 1 Introduction What a person does says a lot about who they are. Information about the types of activities that a person engages in can provide insights about their interests (Goecks and Shavlik, 2000), personality (Ajzen, 1987), physical health (Bouchard et al., 2018), the activities that they are likely to do in the future (Ouellette and Wood, 1998), and other psychological phenomena like personal values (Rokeach, 1973). For example, it has been shown that university students who exhibit traits of interpersonal affect and self-esteem are more likely to attend parties (Paunonen and Ashton, 2001), and those that value stimulation are likely to watch movies that can be categorized as thrillers (Bardi and Schwartz, 2003). Several studies have applied computational approaches to the understanding and modeling of human behavior at scale (Yin et al., 2014) and in real time (Wang et al., 2015). However, this previous work has mainly relied on specific devices or platforms that require structured definitions of behaviors to be measured. While this leads to an accurate understanding of the types of activities being done by the involved users, these methods capture a relatively narrow set of behaviors compared to the huge range of things that people do on a day-to-day basis. On the other hand, publicly available social media data provide us with information about an extremely rich and diverse set of human activities, but the data are rarely structured or categorized, and they mostly exist in the form of natural language. Recently, however, natural language processing research has provided several examples of methodologies for extracting and representing human activities from text (Fast et al., 2016; Wilson and Mihalcea, 2017) and even multimodal data (Agrawal et al., 2016). In this paper, we explore the task of predicting human activities from user-generated text data, which will allow us to gain a deeper understanding of the kinds of everyday activities that people discuss online with one another. Throughout the paper, we use the word “activity” to refer to what an individual user does or has done in their daily life. 
Unlike the typical use of this term in the computer vision community (Cheng et al., 2015; Zhang et al., 2017), in this paper we use it in a broad sense, to also encompass non-visual activities such as “make vacation plans” or “have a dream” We do not focus on fine-grained sequences actions such as “pick up a camera”, “hold a camera to one’s face”, “press the shutter release button”, and others. Rather, we focus on the highlevel activity as a person would report to others: “take a picture”. Additionally, we specifically focus on everyday human activities done by the users themselves, rather than larger-scale events (Atefeh and Khreich, 2015), which are typically characterized by the involvement or interest of many users, often at a specific time and location. Given that the space of possible phrases describ2573 ing human activities is nearly limitless, we propose a set of human activity clusters that summarize a large set of several hundred-thousand selfreported activities. We then construct predictive models that are able to estimate the likelihood that a user has reported that they have performed an activity from any cluster. The paper makes the following main contributions. First, starting with a set of nearly 30,000 human activity patterns, we compile a very large dataset of more than 200,000 users undertaking one of the human activities matching these patterns, along with over 500 million total tweets from these users. Second, we use a state-of-theart sentence embedding framework tailored to recognize the semantics of human activities and create a set of activity clusters of variable granularity. Third, we explore a neural model that can predict human activities based on natural language data, and in the process also investigate the relationships between everyday human activities and other social variables such as personal values. 2 Data While we do not expect to know exactly what a person is doing at any given time, it is fairly common for people to publicly share the types of activities that they are doing by making posts, written in natural language, on social media platforms like Twitter. However, when taking a randomly sampled stream of tweets, we find that only a small fraction of the content was directly related to activities that the users were doing in the real world – instead, most instances are more conversational in nature, or contain the sharing of opinions about the world or links to websites or images. Using such a random sample would require us to filter out a large percentage of the total data collected, making the data collection process inefficient. Therefore, in order to target only those tweets that are rich in human activity content, we formulate a set of queries that allows us to use the Twitter Search API to find instances of users tweeting about common human activities. Each query contains a first-person, past-tense verb within a phrase that describes a common activity that people do. Using this approach, we are able to retrieve a set of tweets that contains a high concentration of human activity content, and we also find that users who wrote these tweets are much more likely to have written other tweets that describe human activities (Table 1). We build our set of human acSampled tweets w/valid activities 2% Queried tweets w/valid activities 81% Addtl. user tweets w/valid activities 15% Table 1: Effect of targeted query approach on activity frequency in tweets. 
“Valid activities” are defined as first-person verb phrases that clearly indicate that the author of the text has actually performed the concrete activity being described. For each set of tweets, a random subset of 100 was chosen and manually annotated for validity. count unique Event2Mind activities 24,537 24,537 Survey activities 5,000 4,957 Total 29,537 29,494 Table 2: Number of human activity queries from multiple sources. tivity queries from two sources: the Event2Mind dataset (Rashkin et al., 2018) and a set of short activity surveys, which we collect ourselves, to obtain nearly 30K queries (Table 2) . 2.1 Event2Mind Activities The Event2Mind dataset contains a large number of event phrases which are annotated for intent and reaction. The events themselves come from four sources of phrasal events (stories, common n-grams found in web data, blogs, and English idioms), and many of them fall under our classification of human activities, making Event2Mind a great resource in our search for concrete examples of human activities. We consider events for which a person is the subject (e.g, “PersonX listens to PersonX’s music”) to be human activities, and remove the rest (e.g., “It is Christmas morning”). We then use several simple rules to convert the Event2Mind instances into first-person past-tense activities. Since all events were already filtered so that they begin with “PersonX”, we replace the first occurrence of “PersonX” in each event with “I” and all subsequent occurrences with “me”. All occurrences of “PersonX’s” become “my”, and the main verb in each phrase is conjugated to its pasttense form using the Pattern python module.1 For example, the event “PersonX teaches PersonX’s son” becomes the query “I taught my son”. Since Event2Mind also contains wildcard placeholders that can match any span of text within the same 1www.clips.uantwerpen.be/pattern 2574 Total queries 29,494 Queried tweets 422,607 Avg. tweets/query 14.33 Valid queried tweets 335,357 Avg. valid tweets/query 11.37 Table 3: Summary of query results. phrase (e.g., “PersonX buys at the store”)2 but the Twitter API doesn’t provide a mechanism for wildcard search, we split the event on the string and generate a query that requires all substrings to appear in the tweet. We then check all candidate tweets after retrieval and remove any for which the substrings do not appear in the same order as the original pattern. 2.2 Short Survey Activities In order to get an even richer set of human activities, we also ask a set of 1,000 people across the United States to list any five activities that they had done in the past week. We collect our responses using Amazon Mechanical Turk,3 and manually verify that all responses are reasonable. We remove any duplicate strings and automatically convert them into first-person and past-tense (if they were not in that form already). For this set of queries, there are no wildcards and we only search for exact matches. Example queries obtained using this approach include “I went to the gym” and “I watched a documentary”. 2.3 Query Results Using our combined set of unique human activity queries, we use the Twitter Search API4 to collect the most recent 100 matches per query (the maximum allowed by the API per request), as available, and we refer to these tweets as our set of queried tweets. 
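The rule-based rewriting of Event2Mind events into first-person, past-tense queries is straightforward to sketch. The snippet below is a minimal illustration, not the authors' exact script: the `to_past` helper is a toy stand-in for the Pattern conjugation step mentioned above, and the example events are our own.

```python
import re

def to_past(verb):
    # Toy stand-in for verb conjugation; the pipeline described above
    # uses the Pattern python module for this step.
    irregular = {"teaches": "taught", "buys": "bought", "goes": "went"}
    if verb in irregular:
        return irregular[verb]
    return re.sub(r"(es|s)$", "", verb) + "ed"

def event_to_query(event):
    """Convert an Event2Mind-style event into first-person, past-tense
    query substrings, following the rules described in Section 2.1."""
    # Possessives become "my"; the first "PersonX" becomes "I",
    # any later occurrence becomes "me".
    text = event.replace("PersonX's", "my")
    text = text.replace("PersonX", "I", 1).replace("PersonX", "me")
    # Treat "PersonY" and the blank wildcard as split points: each
    # resulting substring must appear, in order, in a candidate tweet.
    parts = [p.strip() for p in re.split(r"PersonY|___", text) if p.strip()]
    # Conjugate the main verb (here, naively, the word after "I").
    tokens = parts[0].split()
    if len(tokens) > 1 and tokens[0] == "I":
        tokens[1] = to_past(tokens[1])
    parts[0] = " ".join(tokens)
    return parts

print(event_to_query("PersonX teaches PersonX's son"))  # ['I taught my son']
print(event_to_query("PersonX buys ___ at the store"))  # ['I bought', 'at the store']
```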
We then filter the queried tweets as follows: first, we verify that for any tweets requiring the match of multiple substrings (due to wildcards in the original activity phrase), the substrings appear in the correct order and do not span multiple sentences. Next, we remove activity phrases that are preceded with indications that the author of the tweet did not actually perform the activity, such as “I wish” or “should I . . . ?”. We refer to the set 2We also treat instance of “PersonY” as a wildcard since this could be any name or even a user (@) mention on Twitter. 3www.mturk.com 4developer.twitter.com/en/docs/tweets/search/apireference/get-search-tweets.html Num. unique users 358,091 Additional tweets collected 560,526,633 Avg. additional tweets / user 1,565 Additional activities extracted 21,316,364 Avg. additional activities / user 59.52 Table 4: Summary of additional data. Initial number unique users 358,091 Users with non-empty profiles 96.9% Users with ≥1 addtl. tweets 94.9% Users with ≥25 addtl. tweets 93.1% Users with ≥1 addtl. activities 93.5% Users with ≥5 addtl. activities 87.1% Final number unique valid users 214,708 Table 5: Summary valid user filtering. of tweets left after this filtering as valid queried tweets (see Table 3 for more details). In order to gather other potentially useful information about the users who wrote at least one valid queried tweet, we collect both their self-written profile and their previously written tweets (up to 3,200 past tweets per user, as allowed by the Twitter API), and we refer to these as our set of additional tweets. We ensure that there is no overlap between the sets of queried tweets and additional tweets, so in the unlikely case that a user has posted the same tweet multiple times, it cannot be included in both sets. Further, we use a simple pattern-matching approach to extract additional activities from these additional tweets. We search for strings that match I <VBD> .* <EOS> where <VBD> is any past-tense verb, .* matches any string (nongreedy), and <EOS> matches the end of a sentence. We then perform the same filtering as before for indications that the person did not actually do the activity, and we refer to these filtered matches as our set of additional activities (see Table 4 for more information). Note that since these additional activities can contain any range of verbs, they are naturally noisier than our set of valid query tweets, and we therefore do not treat them as a reliable “ground truth” source of selfreported human activities, but as a potentially useful signal of activity-related information that can be associated with users in our dataset. For our final dataset, we also filter our set of users. From the set of users who posted at least one valid queried tweet, we remove those who had empty user profiles, those with less than 25 addi2575 tional tweets, and those with less than 5 additional activities (Table 5). 2.4 Creating Human Activity Clusters Given that the set of possible human activity phrases is extremely large and it is unlikely that the same phrase will appear multiple times, we make this space more manageable by first performing a clustering over the set of activity phrase instances that we extract from all valid queried tweets. We define an activity phrase instance as the set of words matching an activity query, plus all following words through the end of the sentence in which the match appears. 
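Before turning to clustering, the pattern-based extraction of additional activities described above (`I <VBD> .* <EOS>`) can be approximated by POS-tagging each tweet and keeping first-person past-tense verb phrases through the end of the sentence. The sketch below uses spaCy for tagging; the hedge-phrase filter list is an illustrative assumption, not the authors' actual filter.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative (not exhaustive) markers that the activity was not
# actually performed, mirroring the "I wish" / "should I ...?" filter.
HEDGES = ("i wish", "i want to", "i would", "should i", "i hope to")

def extract_activities(text):
    """Return phrases roughly matching:  I <VBD> .* <EOS>."""
    activities = []
    for sent in nlp(text).sents:
        lowered = sent.text.lower()
        if any(h in lowered for h in HEDGES):
            continue  # author likely did not perform the activity
        for i, tok in enumerate(sent):
            # "I" immediately followed by a past-tense verb (VBD)
            if tok.lower_ == "i" and i + 1 < len(sent) and sent[i + 1].tag_ == "VBD":
                activities.append(sent[i:].text)
                break  # one match per sentence
    return activities

print(extract_activities("I watched a documentary last night. I wish I went to the gym."))
# ['I watched a documentary last night.']
```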
By doing this clustering, our models will be able to make a prediction about the likelihood that a user has mentioned activities from each cluster, rather than only making predictions about a single point in the semantic space of human activities. In order to cluster our activity phrase instances, we need to define a notion of distance between any pair of instances. For this, we turn to prior work on models to determine semantic similarity between human activity phrases (Zhang et al., 2018) in which the authors utilized transfer learning in order to fine-tune the Infersent (Conneau et al., 2017) sentence similarity model to specifically capture relationships between human activity phrases. We use the authors’ BiLSTM-max sentence encoder trained to capture the relatedness dimension of human activity phrases5 to obtain vector representations of each of our activity phrases. The measure of distance between vectors produced by this model was shown to be strongly correlated with human judgments of general activity relatedness (Spearman’s ρ = .722 between the model and human ratings, while inter-annotator agreement is .768). While the relationship between two activity phrases can be defined in a number of ways (Wilson and Mihalcea, 2017), we we chose a model that was optimized to capture relatedness so that our clusters would contain groups of related activities without enforcing that they are strictly the same activity. Since the model that we employed was trained on activity phrases in the infinitive form, we again use the Pattern python library, this time to convert all of our past-tense activities to this form. We also omit the leading first person pronoun from each phrase, and remove user mentions (@<user>), hashtags, and URLs. We then 5Shared by the first author of the referenced paper. “Cooking” make cauliflower stir-fry for dinner make garlic and olive oil vermicelli for lunch start cooking bacon in the oven (on foil in a sheet) burn the turkey make perfect swordfish steaks tonight “Pet/Animal related” get a new pet spider today cuddle 4 dogs get a pet sitter feel so happy being able to pet kitties today spend some time with cats “Spectating” watch football italia watch a football game in the pub watch basketball today watch sports watch fireworks today in the theatre “Passing Examinations” ace the exam pass one’s exam thank god get a perfect score on one’s exam get a c on one’s french exam pass another exam omg Table 6: Examples of clustered activities (with manually provided labels, for reference purposes only). define the distance between any two vectors using cosine distance, i.e., 1 − A·B ||A||||B||, for vectors A and B. We use K-means clustering in order to find a set of kact clusters that can be used to represent the semantic space in which the activity vectors lie. We experiment with kact = 2n with n ∈Z ∩[3, 13] and evaluate the clustering results using several metrics that do not require supervision: within-cluster variance, silhouette coefficient (Rousseeuw, 1987), Calinski-Harabaz criterion (Cali´nski and Harabasz, 1974), and DaviesBouldin criterion (Davies and Bouldin, 1979). 
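Given phrase vectors from the fine-tuned sentence encoder, the clustering sweep and its unsupervised evaluation can be reproduced with scikit-learn along the following lines. This is a sketch under the assumption that `phrase_vectors` holds the encoder output (random data stands in here); vectors are L2-normalised so that Euclidean K-means approximates the cosine distance used above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

# phrase_vectors: (n_phrases, dim) activity-phrase embeddings from the
# BiLSTM-max encoder; random data is used here as a placeholder.
phrase_vectors = np.random.default_rng(0).random((5000, 128))
X = normalize(phrase_vectors)  # Euclidean distance now tracks cosine distance

for n in range(3, 8):  # the paper sweeps n in [3, 13]; a shorter range keeps this toy run fast
    k_act = 2 ** n
    labels = KMeans(n_clusters=k_act, random_state=0).fit_predict(X)
    scores = {
        "silhouette": silhouette_score(X, labels, sample_size=2000, random_state=0),
        "calinski_harabasz": calinski_harabasz_score(X, labels),
        "davies_bouldin": davies_bouldin_score(X, labels),
    }
    print(k_act, scores)
```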
In practice, however, we find that these metrics are strongly correlated (either positively or negatively) with the kact, making it difficult to quantitatively compare the results of using a different number of clusters, and we therefore make a decision based on a qualitative analysis of the clusters.6 For the purpose of making these kinds of 6We acknowledge that similar experiments could be run with different cluster assignments, and our preliminary experiments showed comparable results. It is important to note that we do not treat these clusters as the definitive organization of human activities, but as an approximation of the full activity space in order to reduce the complexity of making predictions about activities in that space. 2576 Distance to “Cooking”: 0.11 cook breakfast cook the spaghetti start cooking cook something simple start cooking a lot more Distance to “Cooking”: 0.52 feed one’s ducks bread all the time give one’s dog some chicken stop eating meat eat hot dogs and fries get one’s dog addicted to marshmellows Distance to “Cooking”: 0.99 take a picture with her post a photo of one bring something like 1000 rolls of film draw a picture of us holding hands capture every magical moment to give to the bride Table 7: Three sample clusters and their distances from the first cluster in Table 6, showing the closest cluster, a somewhat distant cluster, and a very distant cluster. predictions about clusters, it is beneficial to have a smaller number of larger clusters, but clusters that are too large are no longer meaningful since they contain sets of activities that are less strongly related to one another. In the end, we find that using 210 = 1024 clusters leads to a good balance between cluster size and specificity, and we use this configuration for our prediction experiments moving forward. Examples of activities that were assigned the same cluster label are shown in Table 6, and Table 7 illustrates the notion of distance within our newly defined semantic space of human activities. For example, two cooking-related clusters are near to one another, while a photography-related cluster is very distant from both. 3 Methodology Given a set of activity clusters and knowledge about the users who have reported to have participated in these activities, we explore the ability of machine learning models to make inferences about which activities are likely to be next performed by a user. Here we describe the supervised learning setup, evaluation, and neural architecture used for the prediction task. 3.1 Problem Statement We formulate our prediction problem as follows: for a given user, we would like to produce a probability distribution over all activity clusters such that: argmax ci∈C P(ci|h, p, a) = ct , where C is a set of activity clusters, h, p, and a are vectors that represent the user’s history, profile, and attributes, respectively, and ct is the target cluster. The target cluster is the cluster label of an activity cluster that contains an activity that is known to have been performed by the user. If a model is able to accurately predict the target cluster, then it is able to estimate the general type of activity that the user is likely to write about doing in the future given some set of information about the user and what they have written in the past. By also generating a probability distribution over the clusters, we can assign a likelihood that each user will write about performing each group of activities in the future. 
For example, such a model could predict the likelihood that a person will claim to engage in a “Cooking” activity or a “Pet/Animal related” activity. The ability to predict the exact activity cluster correctly is an extremely difficult task, and in fact, achieving that alone would be a less informative result than producing predictions about the likelihood of all clusters. Further, in our setup, we only have knowledge about a sample of activities that people actually have done. In reality, it is very likely that users have participated in activities that belong to a huge variety of clusters, regardless of which activities were actually reported on social media. Therefore, it should be sufficient for a model to give a relatively high probability to any activity that has been reported by a user, even if there is no report of the user having performed an activity from the cluster with the highest probability for that user. 3.2 Model Architecture As input to our activity prediction model, we use three major components: a user’s history, profile, and attributes. We represent a history as a sequence of documents, D, written by the user, that contain information about the kinds of activities that they have done. Let t = |D|, and each document in D is represented as a sequence of tokens. We experiment with two sources for D: all additional tweets written by a user, or only the additional activities contained in tweets written by a user, which is a direct subset of the text contained in the full set of tweets. A user’s profile is a single document, also 2577 Figure 1: Predictive model architecture. represented as a sequence of tokens. For each user, we populate the profile input using the plain text user description associated with their account, which often contains terms which express selfidentity such as “republican” or “athiest.” We represent the tokens in both the user’s history and profile with the pretrained 100dimensional GloVe-Twitter word embeddings (Pennington et al., 2014), and preprocess all text with the script included with these embeddings.7 Finally, our model allows the inclusion of any additional attributes that might be known or inferred in order to aid the prediction task, which can be passed to the model as a dima dimensional real-valued vector. For instance, we can use personal values as a set of attributes, as described in Section 3.3. We train a deep neural model, summarized in Figure 1, to take a user’s history, profile, and attributes, and output a probability distribution over the set of kact clusters of human activities, indicating the likelihood that the user has reported to have performed an activity in each cluster. There are four major components of our network: Document Encoder This is applied to each of the t documents in the history– either an activity phrase or a full tweet. For document i in D, it takes a sequence of token embeddings as input and produces a dimd dimensional vector, di as output. History Encoder This layer takes the sequence 7nlp.stanford.edu/projects/glove/preprocess-twitter.rb {d0, . . . , dt} as input and produces a single dimH dimensional vector, h, as output, intended to represent high-level features extracted from the entire history of the user. Profile Encoder Takes each token in the user’s profile as input and produces a single dimp dimensional vector, p as output. Classifier As input, this module takes the concatenation a ⊕h ⊕p, where a is the predefined attribute vector associated with the user. 
Then, a prediction is made for each of the kact clusters, first applying softmax in order to obtain a probability distribution. We refer to the dimension of the output as dimo. For any of the three encoder layers, several layer types can be used, including recurrent, convolutional, or self-attention based (Vaswani et al., 2017) layers. The classifier layer is the only layer that does not take a sequence as input and we implement it using a simple feed-forward multilayer network containing ℓc layers with hc hidden units each. The network is trained with crossentropy loss, which has been shown to perform competitively when optimizing for top-k classification tasks (Berrada et al., 2018). 3.3 Incorporating Personal Values While the attributes vector a can be used to encode any information of interest about a user, we choose to experiment with the use of personal values because of their theoretical connection to human activities (Bardi and Schwartz, 2003). In order to get a representation of a user’s values, we turn to the hierarchical personal values lexicon from (Wilson et al., 2018). In this lexicon, there are 50 value dimensions, represented as sets of words and phrases that characterize that value. Since users’ profiles often contain value-related content, we use the Distributed Dictionary Representations (DDR) method (Garten et al., 2018) to compute a score, sv for each value dimension, v, using cosine similarity as follows: sv = R(profile) · R(lexiconv) ||R(profile)||||R(lexiconv)|| , where R(·) is a representation of a set of vectors, which, for the DDR method, is defined as the mean vector of the set; profile is a set of word embeddings, one for each token in the user’s profile; and lexiconv is another set of word embeddings, one for each token in the lexicon for value 2578 dimension v. Finally, we set a = (s0, . . . , sdimL) where dimL = 50, the number of value dimensions in the lexicon. Examples of profiles with high scores for sample value dimensions are shown in Table 8. Category Top Scoring Profile Family a mother to my son Nature Environment & nat resource economist tweeting about climate change/risk, energy, environmental protection, green finance, commodities, data science, politics Work-Ethic Football is like life - it requires perseverance, self-denial, hard work, sacrifice, dedication and respect for authority Religion /Galatians 2:20/ I love our Lord Jesus Christ. Table 8: Profiles scoring the highest for various values categories when measured with the values lexicon. Further, we explore the types of activity clusters that contain activities reported by users with high scores for various value dimensions. For a given value, we compute a score for each cluster sC v by taking the average sv of all users who tweeted about doing activities in the cluster. For each value v, we can then rank all clusters by their sC v score. Examples of those with the highest scores are presented in Table 9. We observe that users whose profiles had high scores for Family were likely to report doing activities including family members, those with high scores for Nature tweeted about travel, and those with high Work-Ethic scores reported performing writing related tasks. 
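The DDR scoring step reduces to a cosine similarity between two mean word-embedding vectors. A minimal sketch follows, assuming `embeddings` maps tokens to GloVe-Twitter vectors and `values_lexicon` maps each value dimension to its word list; both are toy stand-ins for the actual resources.

```python
import numpy as np

def mean_vector(tokens, embeddings, dim=100):
    """R(.) in the DDR method: the mean embedding of a set of tokens."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def ddr_scores(profile_tokens, values_lexicon, embeddings):
    """Return the attribute vector a = (s_0, ..., s_{dimL-1}): one cosine
    similarity per value dimension between profile and lexicon."""
    p = mean_vector(profile_tokens, embeddings)
    scores = []
    for value, lexicon_tokens in values_lexicon.items():
        l = mean_vector(lexicon_tokens, embeddings)
        denom = np.linalg.norm(p) * np.linalg.norm(l)
        scores.append(float(p @ l / denom) if denom else 0.0)
    return np.array(scores)

# Toy stand-ins for the GloVe-Twitter embeddings and the values lexicon.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=100) for w in
              ["mother", "son", "climate", "energy", "work", "faith"]}
values_lexicon = {"Family": ["mother", "son"], "Nature": ["climate", "energy"]}
a = ddr_scores("a mother to my son".split(), values_lexicon, embeddings)
print(a)  # one score per value dimension (2 in this toy lexicon; 50 in the paper)
```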
Category Activities in High Scoring Cluster give one’s daughter a number of plants Family take one’s family to the park work in the garden with mom visit another castle Nature visit france go on a fishing trip add another footnote to the dissertation Work-Ethic file a complaint with the fcc write one’s first novel by hand follow the rules Religion study really hard do a good deed Table 9: Activity clusters associated with the highest scoring users for various values categories when measured with the values lexicon. 3.4 Evaluation We evaluate our activity prediction models using a number of metrics that consider not only the most likely cluster, but also the set of keval most likely clusters. First, we evaluate the average per-class accuracy of the model’s ability to rank ct, the target cluster, within the top keval clusters. These scores tell us how well the model is able to make predictions about the kinds of activities that each user is likely to do. Second, we test how well the model is able to sort users by their likelihood of having reported to do an activity from a cluster. This average comparison rank (ACR) score is computed as follows: for each user in the test set, we sample n other users who do not have the same activity label. Then, we use the probabilities assigned by the model to rank all n + 1 users8 by their likelihood of being assigned ct, and the comparison rank score is the percentage of users who were ranked ahead of the target user (lower is better). We then average this comparison rank across all users in the test set to get the ACR. The ACR score tells us how well the model is able to find a rank users based on their likelihood of writing about doing a given activity, which could be useful for finding, e.g., the users who are most likely to claim that they “purchased some pants” or least likely to mention that they “went to the gym” in the future. 4 Experiments and Results We split our data at the user-level, and from our set of valid users we use 200,000 instances for training data, 10,000 as test data, and the rest as our validation set. For the document encoder and profile encoder we use Bi-LSTMs with max pooling (Conneau et al., 2017), with dimd = 128 and dimp = 128. For the history encoder, we empirically found that single mean pooling layer over the set of all document embeddings outperformed other more complicated architectures, and so that is what we use in our experiments. Finally, the classifier is a 3-layer feed-forward network with and dimc = 512 for the hidden layers, followed by a softmax over the dimo-dimensional output. We use Adam (Kingma and Ba, 2014) as our optimizer, set the maximum number of epochs to 100, and shuffle the order of the training data at each epoch. During each train8We set n = 999 in this study to achieve comparison samples of size 1000. 2579 keval 1 2 3 5 10 25 ACR fullT 2.54 5.04 7.01 13.14 24.49 55.36 46.22 −a 2.11 5.05 7.91 13.58 23.29 54.85 46.12 −p 3.20 6.47 9.08 14.70 27.52 60.26 42.24 −a, p 4.29 7.76 10.67 15.92 29.12 61.03 41.51 fullA 2.13 4.46 7.12 11.44 22.49 55.05 47.40 −a 2.60 4.55 7.35 12.26 23.37 54.73 46.17 −p 2.75 4.84 7.56 12.00 25.25 55.36 46.23 −a, p 3.75 6.79 9.73 15.47 28.22 60.87 42.70 −h 2.02 4.13 6.67 11.61 23.43 53.38 47.98 −a, h 1.68 4.55 7.61 11.49 23.41 52.97 47.83 −p, h 2.29 3.61 4.88 9.22 20.48 51.25 49.28 rand 2.00 4.00 6.00 10.00 20.00 50.00 50.00 Table 10: Per-class accuracy (%) @ keval and ACR scores for the 50-class prediction task. 
Note that removing h from either fullT or fullA gives the same model. For ACR only, lower is better. ing step, we represent each user’s history as a new random sample of max sample docs = 100 documents9 if there are more than max sample docs documents available for the user, and we use a batch size of 32 users. Since there is a class imbalance in our data, we use sample weighting in order to prevent the model from converging to a solution that simply predicts the most common classes present in the training data. Each sample is weighted according to its class, c, using the following formula: wc = N count(c) ∗dimo where count(c) is the number of training instances belonging to class c. We evaluate our model on the development data after each epoch and save the model with the highest per-class accuracy. Finally, we compute the results on the test data using this model, and report these results. We test several configurations of our model. We use the complete model described in section 3.2 using either the set of additional tweets written by a user as their history (fullT), or only the set of additional activities contained in those tweets (fullA). Then, to test the effect of the various model components, we systematically ablate the attributes vector input a, the profile text (and subsequently, the Profile Encoder layer) p, and the set of documents, D, comprising the history along with the Document and History Encoders, thereby removing the h vector as input to the classifier. We also explore removing pairs of these inputs at the same time. To contextualize the results, we also 9We empirically found that increasing this value beyond 100 had little effect on the development accuracy. include the theoretical scores achieved by random guessing, labeled as rand.10 We consider two variations on our dataset: the first is a simplified, 50-class classification problem. We choose the 50 most common clusters out of our full set of kact = 1024 and only make predictions about users who have reportedly performed an activity in one of these clusters. The second variation uses the entire dataset, but rather than making predictions about all kact classes, we only make fine-grained predictions about those classes for which count(c) ≥minCount. We do this under the assumption that training an adequate classifier for a given class requires at least minCount examples. All classes for which count(c) < minCount are assigned an “other” label. In this way, we still make a prediction for every instance in the dataset, but we avoid allowing the model to try to fit to a huge landscape of outputs when the training data for some of these outputs is insufficient. By setting minCount to 100, we are left with 805 out of 1024 classes, and an 806th “other” class for our 806-class setup. Note that this version includes all activities from all 1024 clusters, it is just that the smallest clusters are grouped together with the “other” label. While our models are able to make predictions indicating that learning has taken place, it is clear that this prediction task is difficult. In the 50-class setup, the fullT −a, p model consistently had the strongest average per-class accuracy for all values of keval and the lowest (best) ACR score (Table 10). The fullA −a, p model performed nearly as well, showing that using only the human-activity 10For the evaluation metrics considered in this paper, random guessing is as strong or stronger than a “most frequent class” baseline, so we do not report it. 
2580 keval 1 2 3 5 10 25 50 75 100 200 300 ACR fullT 0.15 0.36 0.61 0.97 1.91 4.65 8.66 12.24 16.15 30.69 43.96 44.10 −a 0.32 0.61 0.98 1.39 2.96 5.99 10.21 14.61 18.95 35.19 49.26 42.61 −p 0.45 1.02 1.37 1.96 3.38 7.41 12.71 17.17 21.60 37.53 51.11 41.14 −a, p 0.41 0.70 1.10 1.66 3.03 6.88 12.89 17.86 22.76 38.61 52.38 40.82 fullA 0.29 0.41 0.72 1.04 2.05 4.50 8.50 12.14 15.48 30.04 44.24 45.98 −a 0.24 0.44 0.75 1.02 2.02 4.62 8.70 12.19 15.56 30.18 43.34 45.99 −p 0.23 0.46 0.66 1.13 2.29 5.27 9.66 14.33 18.75 34.00 47.71 42.64 −a, p 0.26 0.47 0.83 1.35 2.24 4.61 8.90 13.24 16.80 31.29 45.11 44.56 −h 0.10 0.28 0.44 0.73 1.37 4.08 7.60 10.96 14.28 27.60 40.77 47.94 −a, h 0.10 0.36 0.53 1.00 1.85 4.64 8.58 12.57 16.23 29.31 41.57 46.94 −p, h 0.10 0.23 0.41 0.68 1.49 3.72 7.12 10.46 13.65 26.90 39.93 48.15 rand 0.12 0.25 0.37 0.62 1.24 2.98 6.34 9.19 12.54 26.21 36.77 50.00 Table 11: Per-class accuracy (%) @ keval and ACR scores for the 806-class prediction task. Note that removing h from either fullT or fullA gives the same model. For ACR only, lower is better. relevant content from a user’s history gives similar results to using the full set of content available. When including the attributes and profile for a user, the model typically overfits quickly and generalization deteriorates. In the 806-class version of the task, we observe the effects of including a larger range of activities, including many that do not appear as often as others in the training data (Table 11). This version of the task also simulates a more realistic scenario, since predictions can be made for the “other” class when the model does to expect the user to claim to do an activity from any of the known clusters. In this setting, we see that the fullT −p model works well for keval ≤25, suggesting that the use of the attribute vectors helps, especially when predicting the correct cluster within the top 25 is important. For keval ≥50, the same fullT −a, p model that worked best in the 50-class setup again outperforms the others. Here, in contrast to the 50-class setting, using the full set of tweets usually performs better than focusing only on the human activity content. Interestingly, the best ACR scores are even lower in the 806-class setup, showing that it is just as easy to rank users by their likelihood of writing about an activity, even when considering many more activity clusters. 5 Conclusions In this paper, we addressed the task of predicting human activities from user-generated content. We collected a large Twitter dataset consisting of posts from more than 200,000 users mentioning at least one of the nearly 30,000 everyday activities that we explored. Using sentence embedding models, we projected activity instances into a vector space and perform clustering in order to learn about the high-level groups of behaviors that are commonly mentioned online. We trained predictive models to make inferences about the likelihood that a user had reported to have done activities across the range of clusters that we discovered, and found that these models were able to achieve results significantly higher than random guessing baselines for the metrics that we consider. While the overall prediction scores are not very high, the models that we trained do show that they are able to generalize findings from one set of users to another. This is evidence that the task is feasible, but very difficult, and it could benefit from further investigation. 
We make the activity clusters, models, and code for the prediction task available at http://lit.eecs.umich.edu/downloads.html Acknowledgments This research was supported in part through computational resources and services provided by the Advanced Research Computing at the University of Michigan. This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), by the John Templeton Foundation (grant #61156), and by DARPA (grant #HR001117S0026-AIDA-FP-045). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, the John Templeton Foundation, or DARPA. Many thanks to the anonymous reviewers who provided helpful feedback. 2581 References Harsh Agrawal, Arjun Chandrasekaran, Dhruv Batra, Devi Parikh, and Mohit Bansal. 2016. Sort story: Sorting jumbled images and captions into stories. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 925–931. Icek Ajzen. 1987. Attitudes, traits, and actions: Dispositional prediction of behavior in personality and social psychology. In Advances in experimental social psychology, volume 20, pages 1–63. Elsevier. Farzindar Atefeh and Wael Khreich. 2015. A survey of techniques for event detection in twitter. Computational Intelligence, 31(1):132–164. Anat Bardi and Shalom H Schwartz. 2003. Values and behavior: Strength and structure of relations. Personality and social psychology bulletin, 29(10):1207–1220. Leonard Berrada, Andrew Zisserman, and M Pawan Kumar. 2018. Smooth loss functions for deep top-k classification. arXiv preprint arXiv:1802.07595. Claude Bouchard, Steven N Blair, and William L Haskell. 2018. Physical activity and health. Human Kinetics. Tadeusz Cali´nski and Jerzy Harabasz. 1974. A dendrite method for cluster analysis. Communications in Statistics-theory and Methods, 3(1):1–27. Guangchun Cheng, Yiwen Wan, Abdullah N Saudagar, Kamesh Namuduri, and Bill P Buckles. 2015. Advances in human action recognition: A survey. arXiv preprint arXiv:1501.05964. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364. David L Davies and Donald W Bouldin. 1979. A cluster separation measure. IEEE transactions on pattern analysis and machine intelligence, (2):224– 227. Ethan Fast, William McGrath, Pranav Rajpurkar, and Michael S Bernstein. 2016. Augur: Mining human behaviors from fiction to power interactive systems. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 237–247. ACM. Justin Garten, Joe Hoover, Kate M Johnson, Reihane Boghrati, Carol Iskiwitch, and Morteza Dehghani. 2018. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis. Behavior research methods, 50(1):344–361. Jeremy Goecks and Jude Shavlik. 2000. Learning users’ interests by unobtrusively observing their normal behavior. In Proceedings of the 5th international conference on Intelligent user interfaces, pages 129–132. ACM. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Judith A Ouellette and Wendy Wood. 1998. 
Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychological bulletin, 124(1):54. Sampo V Paunonen and Michael C Ashton. 2001. Big five factors and facets and the prediction of behavior. Journal of personality and social psychology, 81(3):524. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2mind: Commonsense inference on events, intents, and reactions. In ACL. Milton Rokeach. 1973. The nature of human values. Free press. Peter J Rousseeuw. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 20:53–65. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Rui Wang, Gabriella Harari, Peilin Hao, Xia Zhou, and Andrew T Campbell. 2015. Smartgpa: how smartphones can assess and predict academic performance of college students. In Proceedings of the 2015 ACM international joint conference on pervasive and ubiquitous computing, pages 295–306. ACM. Steven R Wilson and Rada Mihalcea. 2017. Measuring semantic relations between human activities. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 664–673. Steven R Wilson, Yiting Shen, and Rada Mihalcea. 2018. Building and validating hierarchical lexicons with a case study on personal values. In International Conference on Social Informatics, pages 455– 470. Springer. 2582 Hongzhi Yin, Bin Cui, Ling Chen, Zhiting Hu, and Zi Huang. 2014. A temporal context-aware model for user behavior modeling in social media systems. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 1543–1554. ACM. Li Zhang, Steven R. Wilson, and Rada Mihalcea. 2018. Direct network transfer: Transfer learning of sentence embeddings for semantic similarity. CoRR, abs/1804.07835. Shugang Zhang, Zhiqiang Wei, Jie Nie, Lei Huang, Shuang Wang, and Zhen Li. 2017. A review on human activity recognition using vision-based method. Journal of healthcare engineering, 2017.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2583–2593 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2583 You Write Like You Eat: Stylistic variation as a predictor of social stratification Angelo Basile Symanto Research Nürnberg, Germany [email protected] Albert Gatt University of Malta Msida, Malta [email protected] Malvina Nissim University of Groningen Groningen, The Netherlands [email protected] Abstract Inspired by Labov’s seminal work on stylistic variation as a function of social stratification, we develop and compare neural models that predict a person’s presumed socio-economic status, obtained through distant supervision, from their writing style on social media. The focus of our work is on identifying the most important stylistic parameters to predict socioeconomic group. In particular, we show the effectiveness of morpho-syntactic features as stylistic predictors of socio-economic group, in contrast to lexical features, which are good predictors of topic. 1 Introduction In 1966, linguist William Labov set out to corroborate experimentally his observation that in New York City, variation in the pronunciation of postvocalic [r] (as in "car", "for", "pour") is subject to social stratification that is, that NYC people with different socio-economic backgrounds will realise that phoneme in different ways (Labov, 1966, 2006). Avoiding artificially elicited language in favour of spontaneous language use, Labov picked three large department stores from the top, middle, and bottom of the price/prestige range, under the assumption that customers (and salespersons) of these establishments would belong to different social strata. "[Labov’s study] was designed to test two ideas [. . . ]: first, that the variable (r) is a social differentiator in all levels of New York City speech; and second, that casual and anonymous speech events could be used as the basis for a systematic study of language." (Labov, 2006, p. 40. Italics ours.) Inspired by Labov’s work and the recent surge of interest in computational social science (CioffiRevilla, 2016) and computational sociolinguistics (e.g. Johannsen et al., 2015), we set out to investigate whether and to what extent variations in writing style, analysed in terms of several linguistic variables, are influenced by socio-economic status (RQ1; see below). To do so, we use user-generated restaurant reviews on social media. User-generated content bears important similarities to Labov’s "casual and anonymous speech events" on at least two fronts: 1) anonymity is here still preserved since we are not including personal information about the authors; furthermore 2) social media are now recognised in the literature as a source of naturally (i.e. casual) occurring text that can be used to investigate various sociolinguistic phenomena (Herda˘gdelen, 2013; Pavalanathan and Eisenstein, 2015). Labov’s use of the prestige of a store as a proxy for the social class of its customers and employees could be seen as a precursor of distant supervision, an approach which we employ in this study. We leverage online restaurant reviews, and our assumption for acquiring labels is that the socio-economic group of a restaurant’s patrons is in some measure predictable from its price range. 
Using this data, we seek to address the following research questions: (a) To what extent can socio-economic status be predicted from a person’s text (RQ1); (b) Can socio-economic groups be differentiated on the basis of syntactic features, compared to lexical features (RQ2)? Contributions Our contribution consists of 1) a silver dataset containing user-generated reviews labelled with a (distantly obtained) approximation of the socio-economic status of their author, based on the price range of restaurants; 2) a neural model of stylistic variation that can predict socioeconomic status with good performance, and 3) an account of the most important features of style that are predictive of socio-economic status in this domain. Our work can be viewed as a contemporary take on Labov’s approach, with hundreds of subjects instead of only a few, and with a much 2584 larger range of proxies for socio-economic grouping, exploiting user-generated content as a natural communicative setting in which stylistic parameters can be sourced to study variation. To favour reproducibility and future work, we make all code available at https://github. com/anbasile/social-variation.1 2 Data and Labels To work on our questions we need user-generated texts, and a proxy to facilitate distant labelling of an author’s socio-economic status. Reviews are ideal sources of user-generated content: they are not too noisy and are of sufficient length to enable paralinguistic and stylistic parameters to be identified. Restaurant reviews also carry information about the restaurants themselves, especially their price range, which we can use as proxy (see below). We use the Yelp! Dataset: it is released twice a year from Yelp!, a social network where users discuss and review businesses like restaurants, plumbers, bars, etc.2 The review corpus contains more than 5 million documents, from over 1 million authors, with a Zipfian distribution: a small number of authors publish most of the reviews, while most of the authors only leave one review. Grouping reviews per author and filtering out authors with only one review reduces the final dataset to fewer than a thousand authors, though this set of reviews is large and allows us to infer demographic information about the reviewers (see also Hovy et al., 2015). Language The Yelp! dataset contains reviews written in multiple languages, though the vast majority are in English. We use langid.py (Lui and Baldwin, 2012) to automatically detect and filter out non-English instances. The need for both good parsing performance and large quantity of text limits us from working with data from other languages. Price range as proxy To annotate the Yelp! dataset with labels which denote the social class of the authors we adopt the paradigm of distant supervision. We take the price range of the restau1The repository contains all code and models which can be run by acquiring the freely available Yelp dataset. 2This data is released within the context of the Yelp! Challenge, a multi-domain shared task which has attracted attention in NLP primarily for benchmarking text classification (e.g. Yang et al., 2016)). We use the dataset released for Round 11. rant as a proxy for socio-economic status. The average price of a meal in a restaurant is encoded by four labels: $, $$, $$$, $$$$. As a first, coarse step, we accept this representation and divide our population into four groups. 
We group all of the reviews per author and represent each author as a vector, where each element is the price range of a restaurant reviewed by the user. We compute the mode of this vector and the resulting value becomes our silver label. In short, we use the price label of a restaurant as an indicator of the socio-economic group(s) to which its patrons belong, under the assumption that the pricerange of the most visited venue will be the most indicative of the socio-economic status of a given reviewer. Figure 1 illustrates the process. X Y review 1 review 2 ... review N $ $$ ... $$ $$ most frequent $ of reviewed restaurant Figure 1: An illustration of the distant supervision process. Reviews from a single author are grouped together, the price range of the visited restaurants are collected and the most frequent value is assigned as label to the user. Our goal is predicting the assigned label Y from the text X. This coarse representation must undergo further refinement, to satisfy three requirements: (a) Label reliability: we want the most representative users only, that is, only those users whose restaurant price-range falls consistently within a restricted set of categories; (b) Sufficient textual evidence: we want as much text as possible in general, and the highest possible number of reviews per user; (c) Balance: the raw data is highly skewed towards class $$ (Figure 2), but for our experiments we want equally represented classes to avoid any size-related effects. In order to address (a), we employ an entropybased strategy to filter out noisier data points. This 2585 $$ $ $$$ $$$$ Figure 2: Author distribution before filtering. While users belonging to class $$$$ might visit cheaper places, the same is not true in the opposite direction: this explains the small size of class $$$$. is described below. For the size- and balancerelated points (b) and (c), we perform two operations over the entropy-filtered dataset. First, we require a minimum number of reviews per author to ensure sufficient evidence per reviewer without excluding too many instances; we empirically set this threshold to nine reviews. Second, we downsample the larger classes to the size of the smallest class. Entropy-based refinement Table 1 shows two data points for two instances (reviewers a and b): both consist of 16 reviews and both got assigned class 2 (i.e. $$) as a label, since 2 is the class of the restaurant that both authors visited most. However, as can be seen from the column labels, the first reviewer visited restaurants belonging to all four classes, while the second one only visited restaurants of class 2: the second reviewer is clearly a less noisy data point. user labels y entropy a {2: 5, 4: 4, 1: 3, 3: 4} 2 1.37 b {2: 16} 2 0.00 Table 1: Two equal-sized samples, both in group 2. The column labels contains the number of reviews per class. To maximise the ‘purity’ or consistency of reviews associated with each author, we compute the entropy over the label vector: the lower the entropy, the less noisy the reviewer and the more reliable the assigned label (y). In practice, we filter out the authors whose entropy score is above the mean of the whole dataset, estimated after removing authors with one review only. Table 2 shows the final label and token distribution, after filtering and downsampling. In Figure 3, we show two sample reviews, one from class $ and one from class $$$$. 
class authors tokens $ 138 10685 $$ 138 11874 $$$ 138 14872 $$$$ 138 16595 Table 2: Dataset overview after label filtering 3 Label validation: Readability Scores While distant supervision allows the inference of socio-economic status with minimal manual intervention, it also makes interpretation of results challenging due to the threat of circularity involved in the process of collecting data and modelling it at the same time. Thus, We sought some external label validation that would further ensure the soundness of our labels (and thus our strategy). Flekova et al. (2016) showed that the readability of a text correlates with income: the higher the readability, the higher the income. This is also consistent with observations that readability correlates with educational level (Davenport and DeLine, 2014), which in itself plays a role in determining a person’s socio-economic profile (Bourdieu, 2013). Assuming that our labels signal a person’s income bracket, we test whether they correlate with readability scores, which would provide external validation of our distant labelling strategy. We follow Flekova et al. (2016) and use a battery of readability metrics: Automated Readability Index, Coleman Liau Index, Dale-Chall Score, Flesch-Kincaid Ease, Gunning Fog score, Linsear Write Formula and the Lix index.3 The metrics differ in how they measure readability, but they all rely on features such as average number of syllables per sentence, average sentence length, or the percentage of arbitrarily defined complex words in the text. We expect average readability to increase across groups from group 1 ($) to group 4 ($$$$) for all metrics except the Flesch-Reading score, where the metric’s definition leads us to expect an 3We use the implementation of these functions contained in the textstat python library: https://github.com/ shivam5992/textstat. 2586 CLASS $ CLASS $$$$ So freaking good. That’s all I’m gonna say. Don’t believe me? Walk into the place and smell it. [. . . ] Will definitely go back.,Fresh, hand-made pepperoni rolls. . . .. oh yeah. Their cheesy focattia (did I spell that right?) is amazing. Take it home, throw it in the oven, drizzle a little EVOO on top and you’re golden. Friendly people there. Parking sucks, but I’m not taking off a point for that! Their marinara is dee-lish,Super tasty!!! Let me start off saying that 2 years ago my husband and I had a spectacular dinner at L’Atelier by Joel Robuchon and finally got the "Time" to visit Joel Robuchon.We got a limo service and a nice tour inside the mansion of Robuchon which was very memorable and the hostess escorted us to the dining area. Decore: In comparison to L’Atelier this place was much more chic and elegant. However, I still loved the idea to see all the chefs preparing and decorating my plates at L’Atelier. Figure 3: Sample reviews for classes $ and $$$$. inverse correlation (Flesch, 1943). As shown in Table 3, with the exception of Linsear, the correlations go in the predicted direction: average readability score for group K is always higher when compared to group K-1. A Kruskal-Wallis test confirms that differences between groups are significant at p < 0.001. 
Metrics $ $$ $$$ $$$$ ARI 6.48 6.52 6.59 6.91 Coleman-Liau 7.58 7.76 8.07 8.41 Dale-Chall 6.65 6.76 6.94 7.00 Flesch-Kincaid 5.42 5.55 5.59 5.82 Flesch-Reading 81.06 79.93 79.10 77.39 Gunning-Fog 13.46 13.70 14.08 14.23 Linsear 6.00 5.80 5.83 5.72 Lix 30.70 31.39 31.69 32.71 Table 3: The mean readability scores per group: the boldface metric is the only one whose results are not predicted by our hypothesis. 4 Task definition and rationale The prediction of socio-economic status from text can be viewed as a new dimension in the task of author profiling. Due to the nature of the labels (ranging across four classes related to increasing price), this could be seen as an ordinal regression problem. However, following standard practice within the author profiling literature (Rangel Pardo et al., 2015; Rangel et al., 2016), especially regarding modelling age (where real values are binned into discrete classes), we treat this as a classification task. This approach results in a more conservative evaluation strategy (since at test time, a class is evaluated as either accurate or not). In an ordinal setting, one could weight classifier output by its proximity to the target class (e.g. $is closer to $$than to $$$). Given the novelty of our task and data, where evaluation benchmarks and settings are not yet available, we deem the more conservative strategy as the most appropriate one. Given a (collection of) review(s), the task is thus to predict the socio-economic status of its author, assigning one of four classes {$,$$,$$$,$$$$}. First we run a lexicon-based sparse model (the lexical baseline) which we take as a strong baseline (Section 5). Subsequently, we run a battery of dense models experimenting with a variety of abstractions over the lexicon (Section 6). Given the relative novelty of the task, we consider model performance as secondary to the broader scientific goal of identifying which features are determinants of variation as a function of socio-economic group. Thus, we focus on models that use different features, at increasing removes from lexical or topic-based information, seeking to identify the main parameters of variation. 5 Lexical baseline model Our baseline uses an ‘open vocabulary’ approach (Schwartz et al., 2013), a bag-of-word (BOW) representation of the text including all the words in the corpus, resulting in a vocabulary of 15858 items. We extract (3-6) word and character ngrams; no pre-processing is applied. We feed these features to a Logistic Regression model, which has the advantage of being highly interpretable, allowing us to investigate to what extent the model relies on topic words. Using the Scikit-learn implementation (Pedregosa et al., 2011), we train the model on 80% of the data, and test it on the remaining 20%. With an F1 of 0.53, the performance of our lexical baseline is well above a random baseline (F1 = 0.25). 2587 Analysis The scores of this simple model are most likely influenced by topic. While successful, a system assigning high weights to features strongly associated with cheap/expensive food, will limit the scope of our conclusions on stylistic variation. In other words, the features identified are more related to the restaurants themselves than to the writing characteristics of their authors. In Table 4 we report the most important features (words) per class. 
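Returning briefly to the label-validation step of Section 3: the readability comparison is easy to reproduce by scoring each author's concatenated reviews with the textstat library and testing for group differences with a Kruskal-Wallis test. The sketch below assumes the grouping of texts by silver label is available as `texts_by_class` (toy data here); it is an illustration, not the exact validation script.

```python
import textstat
from scipy.stats import kruskal

# texts_by_class: silver label -> list of per-author review texts (toy data).
texts_by_class = {
    "$":    ["So freaking good. Friendly people there.",
             "Their marinara is dee-lish, super tasty!",
             "Take it home, throw it in the oven."],
    "$$$$": ["We had a spectacular dinner and a memorable tour of the mansion.",
             "Compared to Pierre Gagnaire in Paris, the food here is less ambitious.",
             "The decor was much more chic and elegant than L'Atelier."],
}

metrics = {
    "ARI": textstat.automated_readability_index,
    "Coleman-Liau": textstat.coleman_liau_index,
    "Dale-Chall": textstat.dale_chall_readability_score,
    "Flesch-Kincaid": textstat.flesch_kincaid_grade,
    "Flesch-Reading": textstat.flesch_reading_ease,
    "Gunning-Fog": textstat.gunning_fog,
    "Lix": textstat.lix,
}

for name, fn in metrics.items():
    per_class = {c: [fn(t) for t in texts] for c, texts in texts_by_class.items()}
    means = {c: sum(v) / len(v) for c, v in per_class.items()}
    stat, p = kruskal(*per_class.values())
    print(f"{name}: means={means}, Kruskal-Wallis p={p:.3f}")
```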
$ $$ $$$ $$$$ fast tried at excellent kids happy clubs gras coffee staff wynn we customer won music las clean put pretty steak they phoenix night tasting order find club foie came try vegas wine always place buffet course pizza salsa hotel vega Table 4: The 10 most important word features per class. We omit character-level (ngram) features to facilitate interpretability. The output can be easily interpreted. In the least expensive class, we find words like coffee and pizza. The second class is noisier, as the model appears to capture aspects of the reviews related to service rather than food. The two most expensive classes confirm our hypothesis since we find words like Vegas, Wynn (a casino in Las Vegas, USA), [foie-?]-gras, wine and steak. What we observe from this feature analysis is that by relying on words we are capturing aspects of restaurants, to the detriment of a properly stylistic account, whose features would be more authorthan topic-oriented. Capturing author-related stylstic features requires an abstraction away from the lexicon (though not necesssarily from non-content based featues of the lexicon, such as word length or structure). This might yield lower performance, but our main goal is to understand the role played by morpho-syntactic and other non-lexical dimensions of social variation, rather than achieving the highest possible score in classifying reviews. 6 Capturing Style Style and variation can be found at different levels of linguistic abstraction (Eckert and Rickford, 2001). We experiment with a selection of features carefully tailored to capture different aspects of the phenomenon; each feature serves as a representation to be fed to a classifier. First, we preserve the surface structure but get rid of most lexical information, using the bleaching approach proposed by van der Goot et al. (2018) (Section 6.1). Second, we remove words and replace them with POS tags, so as to cancel out topic information entirely (Section 6.2). In the final representation, we use dependency trees and expand the POS tags into triplets to investigate syntactic variation (Section 6.3). In order to properly model the structural information encoded in these non-lexical feature representations, we use a Convolutional Neural Network (CNN) classifier (LeCun et al., 1995), rather than rely on sparse models as we did for our lexical baseline.4 The model consists of a single convolutional layer coupled with a sum-pooling operation; a Multi-Layer Perceptron on top improves discrimination performance between classes. We use the Adam optimizer (Kingma and Ba, 2015) with a fixed learning rate (0.001) and L2 regularization (Ng, 2004); a dropout layer (0.2) (Srivastava et al., 2014) helps to prevent overfitting. For the implementation we rely on spaCy (Honnibal and Johnson, 2015). 6.1 Bleached representation Recently, van der Goot et al. (2018) introduced a language-independent representation termed bleaching for capturing gender differences in writing style, while abstracting away from lexical information. Bleaching preserves surface information while obfuscating lexical content. This allows a focus on lexical variation as a function of personal style, while reducing the possible influence of topic as a determining factor. We experiment with this idea under the assumption that authors belonging to different groups will show a difference in the formality of their writing, and that a bleached representation is well suited for capturing such a difference. 
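The lexical baseline just described can be approximated in a few lines of scikit-learn. The sketch below mirrors the setup (bag of words plus 3-6 character n-grams, no pre-processing, Logistic Regression, 80/20 split); the `reviews` and `labels` arrays are assumed to hold one concatenated review text and one silver label per author, with toy data standing in here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# reviews: one concatenated review string per author
# labels:  the silver socio-economic label per author ($ ... $$$$)
reviews = ["Great pizza and coffee.", "Exquisite foie gras and wine pairing."] * 50
labels = ["$", "$$$$"] * 50

features = make_union(
    CountVectorizer(analyzer="word"),                      # open-vocabulary bag of words
    CountVectorizer(analyzer="char", ngram_range=(3, 6)),  # character 3-6 grams
)
model = make_pipeline(features, LogisticRegression(max_iter=1000))

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.2, random_state=0, stratify=labels)
model.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, model.predict(X_test), average="macro"))

# The most strongly weighted vocabulary items per class can then be read
# off the Logistic Regression coefficients, as in Table 4.
```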
In particular, we hypothesise that some of our 4Although the aim of this paper is not a comparison between sparse and dense models over different representations, we provide all scores for all models in the appendix. 2588 target classes are typified by certain writing styles which differ in their formality and the extent to which they approach informal speech. Thus, we aim to capture the difference between a plainer writing style, with few or no interjections, without abbreviations and/or emojis; and a writing style which more closely approximates speech, making substantial use of exclamation marks and emojis for emphasis, abbreviations, possibly incorrect spelling of words to approximate phonetic form and broad use of direct speech. As an example, the following is a list of sentences taken from different classes of our dataset: $ – hand-made pepperoni rolls. . . .. oh yeah $$ – Their marinara is dee-lish,Super tasty!!! $$$ – When Jet first opened, I loved the place. $$$$ – compared to pierre gagnaire in paris, the food here is way less ambitious We note that orthography seems to differ significantly between these samples: the first two would more likely be viewed as typical web texts, while the last two show a more considered or premeditated writing style. token bleached representation I X_01_True_V_2117 really xxxxxx_06_True_CCVVCC_81 love xxxx_06_True_CVCVCC_15 pizza xxxxx_04_True_CCVC_617 ! !_01_False_!_21 Table 5: An example of how a sentence is rendered by the bleached representation. Table 5 shows some examples of the bleached representation under the abstraction we chose to experiment with, which are as follows. First, we extract the surface form of a word and render each character as either X or x, depending on whether it is capitalised or not. Second, we extract the length of each word prefixed with a 0 to avoid confusion with the frequency of the word (indicated by the number at the end of the bleached string). A boolean label signals whether the token is alphanumeric or not: this feature can be informative in capturing, for instance, the use of emojis. Finally, we approximate the original surface form by substituting all the English vowels with the letter V and all the English consonants with the letter C. 6.2 Morpho-syntax As a more definitive move away from lexical information, we label each word by its POS-tag, using spaCy (Honnibal and Johnson, 2015) and the universal tagset (Petrov et al., 2012). Within this experiment, we train our model using only such a representation, thus inhibiting topic-related features from becoming prominent. We assume that a good performance of the classifier under such conditions provides support for the existence of phenomena related to social variation at the morphosyntactic level. 6.3 Dependency trees Previous research on stylistic variation as a function of age and income shows an important difference in syntax use between groups (Flekova et al., 2016). However, this work reports results based on a shallow interpretation of syntax, i.e. the authors measure the ratio of POS tags in the text: such a strategy is dictated by the relatively poor performance of parsers on the domain investigated by Flekova et al. (2016), i.e. Twitter. Yelp! reviews are closer to canonical English, which allows us to obtain a full syntactic analysis of each document, adopting a strategy closer to that of Johannsen et al. (2015). 
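Returning briefly to the bleached representation of Section 6.1, the transformation illustrated in Table 5 can be sketched as follows; the field order, underscore separator and example frequencies come from the table, while the treatment of digits and other edge cases is an assumption.

    VOWELS = set("aeiouAEIOU")

    def bleach(token, corpus_frequency):
        """Bleach one token: character shape, zero-prefixed length,
        alphanumeric flag, vowel/consonant mask, corpus frequency."""
        shape = "".join("X" if c.isupper() else "x" if c.islower() else c for c in token)
        length = "0{}".format(len(token))
        alnum = str(token.isalnum())
        mask = "".join("V" if c in VOWELS else ("C" if c.isalpha() else c) for c in token)
        return "_".join([shape, length, alnum, mask, str(corpus_frequency)])

    # Frequencies taken from the Table 5 example, for illustration only.
    freq = {"I": 2117, "really": 81, "love": 15, "pizza": 617, "!": 21}
    for tok in ["I", "really", "love", "pizza", "!"]:
        print(tok, bleach(tok, freq[tok]))
    # e.g. bleach("I", 2117) == "X_01_True_V_2117", matching the first row of Table 5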
We first parse our corpus using a pre-trained dependency parser, namely Honnibal and Johnson (2015)’s parser5, which achieves state-of-the-art accuracy on English. Figure 4 shows an example. Oh INTJ god INTJ , PUNCT I PRON really ADV love VERB pizza NOUN intj intj pun ct ns ub j a d v m o d d o b j Figure 4: An example of a parsed sentence using Universal Dependencies. Figure 5: An example of the syntactic feature representation. We then transform each word into a triplet that consists of: 1) the POS tag of the word, 2) the 5We use the largest pre-trained available model, en_core_web_lg. 2589 incoming arc and 3) the POS tag of the head, as shown in Figure 5. This is fed as feature to the classifier. Johannsen et al. (2015) use a ‘bag-ofrelations’ representation in combination with a χ2 test, discarding some structural information in order to ease comparison across languages: here, we rely on the performance of a sequence model (i.e. the CNN classifier) over the transformed dependency tree. As we do in Section 6.2, we assume that a good performance of the classifier points toward the existence of significant syntactic patterns between groups. 7 Evaluation We focus on the comparison of several models against one another and especially against the lexical baseline. This will let us single out which features, or which levels of abstraction (see Section 6), best model style when topic information is reduced or eliminated. For completeness, we also report on the results obtained by a CNN-based version of the LR lexical baseline from Section 5. In Table 6, we report results training our models on 80% of the data and testing them on the remaining 20%, using exactly the same split as for the simple lexical and random models (Section 5). Note that the results are averaged over two runs: we ran the CNN twice for each representation, since it is known that multiple runs of the same neural model on the same dataset can yield significantly different results due to underlying random processes (Reimers and Gurevych, 2017). model F1 random baseline 0.25 LR BOW (lexical) baseline 0.53 CNN lexical 0.54 CNN pos tags 0.33 CNN dependency tree 0.52 CNN bleaching 0.46 Table 6: F1-scores of the Logistic Regression (LR) and Convolutional Network (CNN) models on our dataset. As a general comment, from a class perspective, we observe that class 4 is the easiest to model, while class 2 is the most difficult, for all CNN models (see the confusion matrices in Figure 6). This complements the observation made earlier in relation to Table 4, where it was noted that class 2 is also noisier at the lexical level. Lexical This model serves as a comparison to the LR-based lexical baseline model, while also providing a CNN-based version of this model to ensure fair comparison of a lexical or topic-based strategy against other, non-lexical, CNN models. The lexical CNN achieves approximately the same results as the LR-based lexical baseline, with an overall F-score of 0.54. Bleaching Our CNN model trained on bleached representations shows the lowest performance, though still above random baseline.6 This suggests that abstract, word-level features do have some predictive value, but they do not capture enough lexical content to surpass a simple lexical model that classifies based on topic-based features. At the same time, this result also indicates that the shape of the lexical items used by authors (the outcome of bleaching) is a less reliable predictor of socio-economic status than certain morpho-syntactic properties. 
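For completeness, the triplet construction of Section 6.3 can be sketched with spaCy as follows; en_core_web_lg is the model named in footnote 5, while the underscore-joined string form of each triplet is an assumption about how the features are serialised.

    import spacy

    nlp = spacy.load("en_core_web_lg")  # pre-trained parser named in footnote 5

    def dependency_triplets(text):
        """Replace each token by (its POS tag, incoming dependency arc, POS tag of its head)."""
        doc = nlp(text)
        return ["{}_{}_{}".format(t.pos_, t.dep_, t.head.pos_) for t in doc]

    print(dependency_triplets("Oh god, I really love pizza"))
    # "really", for instance, becomes ADV_advmod_VERB, mirroring Figure 4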
POS tags When using only POS information without words, we find that, as can be expected, performance drops (F = 0.33). From the confusion matrix reported in Figure 6, it appears once again that class 2 is the hardest class to predict. Dependency Trees As an abstraction strategy, this works best out of the three we have tried, and is competitive with the neural lexical model and the logistic regressor. As Figure 6 shows, the model is also predicting each of the four classes more consistently than the other two models. This suggests that we are able to leverage syntactic information as a predictor of social variation, echoing the findings of Johannsen et al. (2015) in a different sociolinguistic domain. Higher accuracy is also achieved without any topic bias, thus providing better evidence that we moved away from a model that predicts which restaurants are the topic of discussion, and moved closer to an account of authorial style. We believe these results provide a positive answer to our main research question (RQ1): to the extent that authors can be distantly grouped according to their socio-economic status, it is possible to differentiate among them on the basis of stylistic parameters. As for our other question, we find that the 6When this feature is used in the logistic regressor instead, it shows good performance. See the Appendix for details. 2590 (a) bleaching (b) POS (c) dependency trees Figure 6: Confusion matrices for the CNN models using bleached representations, POS, and dependency trees. two strongest predictors of our labels are lexical information on the one hand, and syntactic dependencies on the other. We attribute this to the fact that these models are ultimately classifying different things: a lexically-based model relies on topic and thus predicts the type of restaurant. A syntaxbased model is a better approximation to individual style. That these two models achieve very similar F1 scores (0.52 vs 0.54) can be attributed to the fact that filtering and downsampling created a more consistent dataset in which authors were consistently grouped in specific restaurant price ranges. These two models show that it is possible to differentiate among the resulting classes both on the basis of type of establishment (the lexical model) and on the basis of stylistic features in the writing style of its patrons (the syntactic model). 8 Related Work The idea that socio-economic status influences language use and is a determinant of language variation has been central to sociolinguistic theory for a long time (Bernstein, 1960; Labov, 1972, 2006). Labov’s work could be viewed as an early form of distant supervision, exploiting established categories (e.g. the price and status of establishments such as department stores) to draw inferences about variables related to social stratification. The work presented here takes inspiration from this paradigm, and contributes to the growing literature on distant supervision in NLP (Read, 2005), especially in social media (e.g. Plank et al., 2014; Pool and Nissim, 2016; Fang and Cohn, 2016; Basile et al., 2017; Klinger, 2017, inter alia). Computational work on style – i.e. 
linguistic features characteristic of an individual or group (Biber, 1988) – has focussed on demographic or personal variables, ranging from geographical location and dialect (Zampieri et al., 2014; Han et al., 2014; Eisenstein, 2013) to age and gender (Argamon et al., 2007; Newman et al., 2008; Sarawgi et al., 2011; Johannsen et al., 2015; Hovy and Søgaard, 2015), as well as personality (Argamon et al., 2005; Verhoeven et al., 2016; Youyou et al., 2015). An general overview of computational sociolinguistics can be found in Nguyen et al. (2016). By contrast, there has been relatively little work on socio-economic status. Flekova et al. (2016) show that textual features can predict income, demonstrating a relationship between this and age. Lampos et al. (2016) also report good results on inferring the socio-economic status of social media users from text. Like the present work, they use distant supervision, exploiting occupation information in Twitter profiles. Our work differs from these precedents in that we investigate a broader range of lexical, morphological and syntactic features in a novel domain. Previous work specifically on the language of food has also found that social media data can be used to validate sociological hypotheses, such as the importance of a specific meal in a certain geographical region (Fried et al., 2014). Somewhat closer to the present work, Jurafsky (2014) finds an interesting correlation between the price range of a restaurant and the lengths of food names on its menu. 9 Conclusion Inspired by Labov and encouraged by recent interest in computational sociolinguistics, we developed accurate neural models to predict socioeconomic status from text. While lexical information is highly predictive, it is restricted to topic. In contrast, syntactic information is almost as predictive and is a much better signal for stylistic varia2591 tion. From a methodological point of view, we can draw two conclusions from this work. First, as has been noted (Plank et al., 2016), neural networks can perform well with relatively small datasets, in this case proving competitive with the sparse models that are usually favoured in author profiling (Malmasi et al., 2017; Basile et al., 2018). Second, distant supervision with proxy labels for socio-economic status yields useful insights and is validated externally via readability scores. This is encouraging for further studies in computational social science in ecologically valid and relatively labour-free settings. Nevertheless, there are limitations of distant labelling and social media data — with issues related specifically to the language of food (Askalidis and Malthouse, 2016) — that we will take into account in future work. First, we wish to investigate the role of additional variables (such as age and gender). Second, we will take steps to mitigate the risk of fake reviews and validate the distant labelling with human annotation. Acknowledgements We would like to thank the three anonymous reviewers who helped us improve the quality of this paper. The first author’s contribution was made while at the Universities of Malta and Groningen as part of the Erasmus Mundus M.Sc. Program in Human Language Science and Technology. References Shlomo Argamon, Sushant Dhawle, Moshe Koppel, and James W Pennebaker. 2005. Lexical predictors of personality type. In Proceedings of the Joint Annual Meeting of the Interface and the Classification Societies of North America. Shlomo Argamon, Moshe Koppel, James Pennebaker, and Jonathan Schler. 2007. 
Mining the Blogosphere: Age, gender and the varieties of selfexpression. First Monday, 12(9). Georgios Askalidis and Edward C. Malthouse. 2016. Understanding and overcoming biases in customer reviews. CoRR, abs/1604.00417. Angelo Basile, Tommaso Caselli, and Malvina Nissim. 2017. Predicting Controversial News Using Facebook Reactions. In Proceedings of the Fourth Italian Conference on Computational Linguistics CLiCit. Angelo Basile, Gareth Dwyer, Maria Medvedeva, Josine Rawee, Hessel Haagsma, and Malvina Nissim. 2018. Simply the Best: Minimalist System Trumps Complex Models in Author Profiling. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 143– 156. Springer. Basil Bernstein. 1960. Language and Social Class. Journal of Sociology, 11(3):271–276. Douglas Biber. 1988. Variation Across Speech and Writing. Cambridge University Press, Cambridge. Pierre Bourdieu. 2013. Distinction: A social critique of the judgement of taste. Routledge. Claudio Cioffi-Revilla. 2016. Computational social science. Proceedings of the National Academy of Sciences of the United States of America, 113(3):468–470. James RA Davenport and Robert DeLine. 2014. The readability of tweets and their geographic correlation with education. arXiv preprint arXiv:1401.6058. Penelope Eckert and John R Rickford. 2001. Style and sociolinguistic variation. Cambridge University Press. Jacob Eisenstein. 2013. Phonological factors in social media writing. In Proceedings of the NAACLHLT 2013 Workshop on Language Analysis in Social Media, Atlanta, Georgia. Association for Computational Linguistics. Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to lowresource POS tagging using cross-lingual projection. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 178–186. Lucie Flekova, Daniel Preo¸tiuc-Pietro, and Lyle Ungar. 2016. Exploring stylistic variation with age and income on twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 313–319. Rudolf Flesch. 1943. Marks of Readable Style. Columbia University, New York. Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, and Dane Bell. 2014. Analyzing the language of food on social media. In 2014 IEEE International Conference on Big Data (Big Data), pages 778–783. IEEE. Rob van der Goot, Nikola Ljubeši´c, Ian Matroos, Malvina Nissim, and Barbara Plank. 2018. Bleaching text: Abstract features for cross-lingual gender prediction. In Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. 2592 Bo Han, Paul Cook, and Timothy Baldwin. 2014. Textbased twitter user geolocation prediction. Journal of Artificial Intelligence Research, 49:451–500. Amaç Herda˘gdelen. 2013. Twitter N-Gram Corpus With Demographic Metadata. Language resources and evaluation, 47(4):1127–1147. Matthew Honnibal and Mark Johnson. 2015. An Improved Non-monotonic Transition System for Dependency Parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1373–1378, Lisbon, Portugal. Association for Computational Linguistics. Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015. User review sites as a resource for largescale sociolinguistic studies. In Proceedings of the 24th International Conference on World Wide Web, pages 452–461. International World Wide Web Conferences Steering Committee. 
Dirk Hovy and Anders Søgaard. 2015. Tagging Performance Correlates with Author Age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL’15), pages 483–488. Anders Johannsen, Dirk Hovy, and Anders Søgaard. 2015. Cross-lingual syntactic variation over age and gender. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 103–112. Dan Jurafsky. 2014. The language of food: A linguist reads the menu. WW Norton & Company. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Roman Klinger. 2017. Does optical character recognition and caption generation improve emotion detection in microblog posts? In Proceedings of Natural Language Processing and Information Systems 2017 (NLDB’17), pages 313–319. William Labov. 1966. The social stratification of English in New York city. ERIC, Washington DC: Center for Applied Linguistics. William Labov. 1972. Language in the Inner City: Studies in the Black English Vernacular. University of Pennsylvania Press. William Labov. 2006. The social stratification of English in New York city. Cambridge University Press. Vasileios Lampos, Nikolaos Aletras, Jens K Geyti, Bin Zou, and Ingemar J Cox. 2016. Inferring the socioeconomic status of social media users based on behaviour and language. In European Conference on Information Retrieval, pages 689–695. Springer. Yann LeCun, Yoshua Bengio, et al. 1995. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995. Marco Lui and Timothy Baldwin. 2012. Langid.Py: An Off-the-shelf Language Identification Tool. In Proceedings of the ACL 2012 System Demonstrations, ACL ’12, pages 25–30. Association for Computational Linguistics. Shervin Malmasi, Keelan Evanini, Aoife Cahill, Joel Tetreault, Robert Pugh, Christopher Hamill, Diane Napolitano, and Yao Qian. 2017. A report on the 2017 native language identification shared task. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 62–75. Matthew L. Newman, Carla J. Groom, Lori D. Handelman, and James W. Pennebaker. 2008. Gender Differences in Language Use: An Analysis of 14,000 Text Samples. Discourse Processes, 45(3):211–236. Andrew Y Ng. 2004. Feature selection, L 1 vs. L 2 regularization, and rotational invariance. In Proceedings of the twenty-first international conference on Machine learning, page 78. ACM. Dong Nguyen, A. Seza Dogruöz, Carolyn Penstein Rosé, and Franciska de Jong. 2016. Computational sociolinguistics: A survey. Computational Linguistics, 42:537–593. Umashanthi Pavalanathan and Jacob Eisenstein. 2015. Audience-Modulated Variation in Online Social Media. American Speech, 90(2):187–213. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825–2830. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A Universal Part-of-Speech Tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). Barbara Plank, Dirk Hovy, Ryan McDonald, and Anders Søgaard. 2014. Adapting taggers to Twitter with not-so-distant supervision. 
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1783–1792. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 412–418. 2593 Chris Pool and Malvina Nissim. 2016. Distant supervision for emotion detection using facebook reactions. In PEOPLES@COLING. Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Overview of the 4th author profiling task at PAN 2016: cross-genre evaluations. In Working Notes Papers of the CLEF 2016 Evaluation Labs. CEUR Workshop Proceedings/Balog, Krisztian [edit.]; et al., pages 750–784. Francisco Manuel Rangel Pardo, Fabio Celli, Paolo Rosso, Martin Potthast, Benno Stein, and Walter Daelemans. 2015. Overview of the 3rd Author Profiling Task at PAN 2015. In CLEF 2015 Evaluation Labs and Workshop Working Notes Papers, pages 1– 8. Jonathon Read. 2005. Using Emoticons to Reduce Dependency in Machine Learning Techniques for Sentiment Classification. In Proceedings of the ACL Student Research Workshop, ACLstudent ’05, pages 43–48. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348. Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. Gender attribution: tracing stylometric evidence beyond topic and genre. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 78–86. Association for Computational Linguistics. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ben Verhoeven, Walter Daelemans, and Barbara Plank. 2016. Twisty: a multilingual twitter stylometry corpus for gender and personality profiling. In Proceedings of the 10th Annual Conference on Language Resources and Evaluation (LREC 2016)/Calzolari, Nicoletta [edit.]; et al., pages 1–6. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Wu Youyou, Michal Kosinski, and David Stillwell. 2015. Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4):1036–1040. Marcos Zampieri, Liling Tan, Nikola Ljubeši´c, and Jörg Tiedemann. 2014. A report on the DSL shared task 2014. In Proceedings of the first workshop on applying NLP tools to similar languages, varieties and dialects, pages 58–67. 
Appendix: Additional results

model                 class      precision  recall  f1-score
lexical               $          0.53       0.68    0.59
                      $$         0.37       0.25    0.30
                      $$$        0.67       0.50    0.57
                      $$$$       0.58       0.75    0.66
                      avg/total  0.54       0.54    0.53
abstract              $          0.61       0.71    0.66
                      $$         0.50       0.32    0.39
                      $$$        0.39       0.32    0.35
                      $$$$       0.42       0.57    0.48
                      avg/total  0.48       0.48    0.47
POS-tags              $          0.27       0.43    0.33
                      $$         0.15       0.07    0.10
                      $$$        0.29       0.14    0.19
                      $$$$       0.40       0.57    0.47
                      avg/total  0.28       0.30    0.27
dependency triplets   $          0.43       0.36    0.39
                      $$         0.23       0.21    0.22
                      $$$        0.21       0.25    0.23
                      $$$$       0.37       0.39    0.38
                      avg/total  0.31       0.30    0.31

Table 7: Classification report for the sparse model using the different representations.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2594–2604 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2594 Encoding Social Information with Graph Convolutional Networks for Political Perspective Detection in News Media Chang Li Purdue University [email protected] Dan Goldwasser Purdue University [email protected] Abstract Identifying the political perspective shaping the way news events are discussed in the media is an important and challenging task. In this paper, we highlight the importance of contextualizing social information, capturing how this information is disseminated in social networks. We use Graph Convolutional Networks, a recently proposed neural architecture for representing relational information, to capture the documents’ social context. We show that social information can be used effectively as a source of distant supervision, and when direct supervision is available, even little social information can significantly improve performance. 1 Introduction Over the last decade we witness a dramatic change in the way information is generated and disseminated. Instead of a few dedicated sources that employ reporters and fact checkers to ensure the validity of the information they provide, social platforms now provide the means for any user to distribute their content, resulting in a sharp increase in the number of information outlets and articles covering news events. As a direct result of this process, the information provided is often shaped by their underlying perspectives, interests and ideologies. For example, consider the following two snippets discussing the comments made by a Democratic Senator regarding the recent U.S. government shutdown. thehill.com (Center) Sen. Mark Warner (D-Va.) on Sunday blasted President Trump for his “inept negotiation” to bring an end to the ongoing partial government shutdown. Warner, the ranking member of the Senate Intelligence Committee, lamented the effect the shutdown has had on hundreds of thousands of federal workers who have been furloughed or forced to work without pay. infowars.com (Right) Senator Mark Warner (D-Va.) is being called out on social media for his statement on the partial government shutdown. Warner blamed the “suffering” of federal workers and contractors on President Trump in a Sunday tweet framing Trump as an “inept negotiator”. Twitter users pointed out that Democrats are attending a Puerto Rican retreat with over 100 lobbyists and corporate executives. Despite the fact that both articles discuss the same event, they take very different perspectives. The first reporting directly about the comments made, while the second one focuses on negative reactions to these comments. Identifying the perspective difference and making it explicit can help strengthen trust in the newly-formed information landscape and ensure that all perspectives are represented. It can also help lay the foundation for the automatic detection of false content and rumors and help identify information echo-chambers in which only a single perspective is highlighted. Traditionally, identifying the author’s perspective is studied as a text-categorization problem (Greene and Resnik, 2009; Beigman Klebanov et al., 2010; Recasens et al., 2013; Iyyer et al., 2014; Johnson and Goldwasser, 2016), focusing on linguistic indicators of bias or issueframing phrases indicating their authors’ bias. 
These indicators can effectively capture bias in ideologically-charged texts, such as policy documents or political debates, which do not try to hide their political leaning and use a topic-focused vocabulary. Identifying the authors’ bias in news narratives can be more challenging. News articles, by their nature, cover a very large number of topics resulting in a diverse and dynamic vocabulary that is continuously updated as new events unfold. Furthermore, unlike purely political texts, news narratives attempt to maintain credibility and seem im2595 partial. As a result, bias is introduced in subtle ways, usually by emphasizing different aspects of the story. Our main insight in this paper is that the social context through which the information is propagated can be leveraged to alleviate the problem, by providing both a better representation for it, and when direct supervision is not available, a distantsupervision source based on information about users who endorse the textual content and spread it. Several recent works dealing with information dissemination analysis on social networks, focused on analyzing the interactions between news sources and users in social networks (Volkova et al., 2017; Glenski et al., 2018; Ribeiro et al., 2018). However, given the dynamic, and often adversarial setting of this domain, the true source of the news article might be hidden, unknown or masked by taking a different identity. Instead of analyzing the documents’ sources, our focus is to use social information, capturing how information is shared in the network, to help guide the text representation and provide additional support when making decisions over textual content. We construct a socially-infused textual representation, by embedding in a single space the news articles and the social circles in which these articles are shared so that the political biases associated with them can be predicted. Figure 1 describes these settings. The graph connects article nodes via activity-links to users nodes (share), and these users in turn are connected via social-links (follow) to politically affiliated users (e.g., the Republican or Democratic parties twitter accounts). We define an embedding objective capturing this information, by aligning the documents representation, based on content, with the representation of users who share these documents, based on their social relations. We use a recently proposed graph embedding framework, Graph Convolutional Networks (GCN) (Kipf and Welling, 2016, 2017) to capture these relationships. GCNs are neural nets operating on graphs, and similar to LSTMs on sequences, they create node embeddings based on the graph neighborhood of a given node. In the context of our problem, the embedding of a document takes into account the textual content, but also the social context of users who share it, and their relationships with other users with known political affiliations. We compare this powerful approach with traditional graph embedding methods that only capture local relationships between nodes. Given the difficulty of providing direct supervision in this highly dynamic domain, we study this problem both when direct supervision over the documents is available, and when using distantsupervision, in which the document level classification depends on propagating political tendencies through social network, which is often incomplete and provides conflicting information. To study these settings we focus on U.S. news coverage. 
Our corpus consists of over 10k articles, covering more than 2k different news events, about 94 different topics, taking place over a period of 8 years. We remove any information about the source of the article (both meta-data and in the text) and rely only on the text and the reactions to it on social media. To capture this information, we collected a set of 1.6k users who share the news articles on Twitter and a handful of politically-affiliated users followed by the sharing users, which provide the distant supervision. We cast the problem as a 3-class prediction problem, capturing left-leaning bias, right-leaning bias or no bias (center). Our experimental results demonstrate the strength of our approach. We compare direct text classification or node classification methods to our embedding-based approach in both the fully supervised and distant supervised settings, showing the importance of socially infused representations. Social Link (follow) Activity Link (share) Politically -Affiliated Sharing User Figure 1: Information Flow Graph 2596 2 Related Work The problem of perspective identification is typically studied as a supervised learning task (Lin et al., 2006; Greene and Resnik, 2009), in which a classifier is trained to differentiate between two specific perspectives. For example, the bitterlemons dataset consisting of 594 documents describing the Israeli and Palestinian perspectives. More recently, in SemEval-2019, a hyperpartisian news article detection task was suggested1. The current reported results on their dataset are comparable to ours, when using text information alone, demonstrating that it is indeed a challenging task. Other works use linguistic indicators of bias and expressions of implicit sentiment (Greene and Resnik, 2009; Recasens et al., 2013; Choi and Wiebe, 2014; Elfardy et al., 2015). In recent years several works looked at indications of framing bias in news articles (Baumer et al., 2015; Budak et al., 2016; Card et al., 2016; Field et al., 2018; Morstatter et al., 2018). We build on these work to help shape our text representation approach. Recent works looked at false content identification (Volkova et al., 2017; Patwari et al., 2017), including a recent challenge2 identifying the relationship between an article’s title and its body. Unlike these, we do not assume the content is false, instead we ask if it reflects a different perspective. Using social information when learning text representations was studied in the context of graph embedding (Pan et al., 2016), extending traditional approaches that rely on graph relations alone (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016) and information extraction and sentiment tasks (Yang et al., 2016a; West et al., 2014). In this work we focus on GCNs (Kipf and Welling, 2017; Schlichtkrull et al., 2018), a recent framework for representing relational data, that adapts the idea of convolutional networks to graphs. Distant supervision for NLP tasks typically relies on using knowledge-bases (Mintz et al., 2009), unlike our setting that uses social information. Using user activity and known user biases was explored in (Zhou et al., 2011), our settings are far more challenging as we do not have access to this information. 1https://webis.de/events/semeval-19/ 2http://www.fakenewschallenge.org Articles 10,385 Twitter Users 1,604 -Left 3,931 Pol. Users 135 -Right 2,290 Left Pol. Users 49 -Center 4,164 Right Pol. Users 51 Sources 86 Center Pol. 
Users 35 Types 94 Avg # shared per Article 23.29 Events 2,020 Avg # pol. users followed 20.36 Table 1: Dataset Statistics 3 Dataset Description We collected 10,385 news articles from two news aggregation websites3 on 2,020 different events discussing 94 event types, such as elections, terrorism, etc. The websites provide news coverage from multiple perspectives, indicating the bias of each article using crowdsourced and editorial reviewed approaches4. We preprocessed all the documents to remove any information about the source of the article. We collected social information consisting of Twitter users who share links to the collected articles. We focused on Twitter users who follow political users and share news articles frequently (100 articles minimum). We found 1,604 such Twitter users. The list of political users was created by collecting information about active politically affiliated users. It consists of 135 Twitter users who are mainly politicians, political journalists and political organizations. The set of political users and Twitter users are disjoint. The summary of the dataset is shown in Table 1. Data Folds We created several data splits to evaluate our model in the supervised settings, based on three criteria: randomly separated, event separated and time separated splits. In the eventseparated case, we divide the news articles such that all articles covering the same news event will appear in a single fold. For the time-separated case, we sort the publication dates (from oldest to latest) and divide them in three folds. Each time one fold is used as training data (33%) and the other two combined as test data (66%). We use the same folds throughout the experiment of supervised classification for evaluation purpose. Constructing the Social Information Graph We represent the relevant relationships as an information graph, similar to the one depicted in Figure 1. The social information graph G = {V, E}, 3Memeorandum.com and Allsides.com 4https://www.allsides.com/media-bias/ media-bias-rating-methods 2597 consisting of several different types of vertices and edges, is defined as follows: • Let P ⊂V denote the set of the political users. These are Twitter users with a clear, selfreported, political bias. They may be the accounts of politicians (e.g., Sarah Palin, Nancy Pelosi), political writers in leading newspapers (e.g., Anderson Cooper) or political organizations (e.g., GOP, House Democrats). Note that even political users that share a general political ideology can differ significantly in the type of issues and agenda they would pursue, which would be reflected in their followers. • Let U ⊂V denote the set of Twitter users that actively spread content by sharing news articles. The political bias of these users is not directly known, only indicated indirectly through the political users they follow on Twitter. • Let A ⊂V denote the set of news articles shared by the Twitter users (U). The graph vertices are connected via a set of edges described hierarchically, as follows: • EUP ⊂E: All the Twitter users are connected to the political users whom they follow. Note that a Twitter user may be connected to many different political users. • EAU ⊂E: All the articles are connected to the Twitter users who share them. Note that an article may be shared by many different Twitter users. 4 Text and Graph Model Our goal is to classify news articles into 3-classes corresponding to their bias. 
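Concretely, the information graph just defined can be assembled as a sparse adjacency structure; the sketch below uses illustrative node names and SciPy, neither of which is prescribed above.

    import numpy as np
    import scipy.sparse as sp

    # Toy instance of the graph in Section 3; all identifiers are illustrative.
    articles  = ["a1", "a2"]                   # A: news articles
    users     = ["u1", "u2"]                   # U: sharing Twitter users
    political = ["p_left", "p_right"]          # P: politically affiliated users
    share_edges  = [("a1", "u1"), ("a2", "u1"), ("a2", "u2")]   # E_AU (share)
    follow_edges = [("u1", "p_left"), ("u2", "p_right")]        # E_UP (follow)

    nodes = articles + users + political
    index = {name: i for i, name in enumerate(nodes)}

    rows, cols = [], []
    for src, dst in share_edges + follow_edges:   # undirected: add both directions
        rows += [index[src], index[dst]]
        cols += [index[dst], index[src]]
    adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                        shape=(len(nodes), len(nodes)))
    # adj (plus self-loops and normalisation) is what the graph models below operate on.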
Since we have both the textual and social information for the news articles, we can obtain representations for them using either the text or graph models. In this section, we briefly go through the text representation methods, and then move to describe the graph based models we considered in this paper. 4.1 Text Representations and Linguistic Bias Indicators To predict the bias of the news articles, we can consider it as a document classification task. We use the textual content of a news article to generate a feature representation. Deciding on the appropriate representation for this content is one of the key design choices. Previous works either use traditional, manually engineered representations for capturing bias (Recasens et al., 2013) or use latent representations learned using deep learning methods (Iyyer et al., 2014). We experimented with several different choices of the two alternatives, and compared them by training a classifier for bias prediction over the document directly. The results of these experiments are summarized in Table 2. Due to space constraints, we provide a brief overview of these alternatives, and point to the full description in the relevant papers. Linear BoW Unigram features were used. The articles consist of 77,772 unique tokens. We used TFIDF vectors as unigram features obtained by using scikit-learn (Pedregosa et al., 2011). Bias Features These are content based features drawn from a wide range of approaches described in the literature on political bias, persuasion, and misinformation, capturing structure, sentiment, topic, complexity, bias and morality in the text. We used the resources in (Horne et al., 2018b) to generate 141 features based on the news article text, which were shown to work well for the binary hyper-partisan task (Horne et al., 2018a). Averaged Word Embedding (WE) The simplest approach for using pre-trained word embeddings. An averaged vector of all the document’s words using the pre-trained GloVe word embeddings (Pennington et al., 2014) were used to represent the entire article. Skip-Thought Embedding Unlike the Averaged word vector that does not capture context, we also used a sentence level encoder, Skip-Thought (Kiros et al., 2015), to generate text representations. We regard each document as a long sentence and map it directly to a 4800-dimension vector. Hierarchical LSTM over tokens and sentences We used a simplified version of the Hierarchical LSTM model (Yang et al., 2016b). In this case documents are first tokenized into sentences, then each sentence was tokenized into words. We used a word-level LSTM to construct a vector representation for each sentence, by taking the average of all the hidden states. Then, we ran another single layer unidirectional LSTM over the sentence representations to get the document representation by taking average of all the hidden states. 2598 4.2 Graph-Based Representations In addition to the textual information, the news articles are also part of the information network defined in Section 3. Intuitively, news articles shared by the same Twitter users are likely to have the same bias, and users who share a lot of news in common are close in their political preferences. A similar intuition connects users who follow similar politically affiliated users. Capturing this information allows us to predict the bias of a news article, given its social context. 
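As a concrete illustration of the hierarchical LSTM encoder of Section 4.1, a minimal PyTorch sketch follows; hidden sizes of 64 are stated in Section 6.1, while the embedding dimension, the single linear output layer standing in for the feed-forward classifier of Eq. 8, and the toy input are assumptions.

    import torch
    import torch.nn as nn

    class HierarchicalLSTM(nn.Module):
        """Word-level LSTM averaged into sentence vectors, then a sentence-level
        LSTM averaged into a document vector, as described in Section 4.1."""
        def __init__(self, vocab_size, emb_dim=100, hidden=64, n_classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
            self.sent_lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.clf = nn.Linear(hidden, n_classes)

        def forward(self, doc):                # doc: list of LongTensors, one per sentence
            sent_vecs = []
            for sent in doc:
                out, _ = self.word_lstm(self.emb(sent.unsqueeze(0)))  # (1, words, hidden)
                sent_vecs.append(out.mean(dim=1))                     # average hidden states
            sents = torch.stack(sent_vecs, dim=1)                     # (1, sentences, hidden)
            out, _ = self.sent_lstm(sents)
            doc_vec = out.mean(dim=1)                                 # document representation
            return self.clf(doc_vec)                                  # bias logits

    # Toy usage: a 2-sentence "document" over a 10-word vocabulary.
    model = HierarchicalLSTM(vocab_size=10)
    doc = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
    print(model(doc).shape)   # torch.Size([1, 3])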
We design our embedding function to map all graph nodes into a low dimensional vector space, such that the graph relationships are preserved in the embedding space. In the shared embedding space, nodes that are connected (or close) in the graph should have higher similarity scores between their vector representations. 4.2.1 Directly Observed Relationships in Graph (DOR) Our first embedding approach aims to preserve the local pairwise proximity between two vertices directly. This is similar to first-order graph embedding methods (Tang et al., 2015). There are two different relations observed in the graph: Twitter user to political user (follow) and news article to Twitter user (share). We construct our embedding over multiple views of the data, each view w corresponds to a specific type of graph relation. We can then define an loss function Lw for each view w as follows: • Twitter User to Political User (UP): This objective maximizes the similarity of a Twitter user, u and all the political users in the set Pu ⊂P, where Pu is the set of political users that u follows. LUP = − X u∈U X p∈Pu logP(p|u) (1) • News Article to Twitter User (AU): This objective maximizes the similarity of a news articles, a and all the Twitter users in the set Ua ⊂U, where Ua is the set of Twitter users who shared news article a on Twitter. LAU = − X a∈A X u∈Ua logP(u|a) (2) All the conditional probabilities can be computed using a softmax function. Taking P(p|u) as an example: P(p|u) = exp(eT u ep) P q∈P exp(eTu eq) (3) where eu and ep are embeddings of twitter user u and political user p respectively. Computing Eq. 1 and Eq. 2 can be expensive due to the size of the network. To address this problem, we refer to the popular negative sampling approach (Mikolov et al., 2013), which reduce the time complexity to be proportional to the number of positive example pairs (i.e. number of edges in our case). The loss defined for the two views are summed with the classification loss defined in Eq. 9 as the final loss function to be optimized in DOR embedding model. LDOR = Lclf + LUP + LAU (4) 4.2.2 Graph Convolutional Networks (GCN) Graph Convolutional Networks is an efficient variant of convolutional neural networks which operate directly on graphs. It can be regarded as special cases of a simple differentiable message-passing framework (Gilmer et al., 2017): h(l+1) i = σ X j∈N(i) M(l)(h(l) i , h(l) j ) ! (5) where h(l) i ∈Rd(l) is the hidden state of node vi in the l-th layer of the neural network, with d(l) as the dimensionality of representation at layer l. N(i) is the set of direct neighbors of node vi (usually also include itself). Incoming messages from the local neighborhood are aggregated together and passed through the activation function σ(·), such as tanh(·). M(l) is typically chosen to be a (layer-specific) neural network function. Kipf and Welling (2017) used a simple linear transformation M(l)(ht i, ht j) = W (l)hj where W (l) is a layer-specific weight matrix. This linear transformation has been shown to propagate information effectively on graphs. It leads to significant improvements in node classification (Kipf and Welling, 2017), link prediction (Kipf and Welling, 2016), and graph classification (Duvenaud et al., 2015). One GCN layer can be expressed as follows: H(l+1) = σ( ˆAH(l)W (l)) (6) 2599 where ˆA is the normalized adjacency matrix, and W (l) is the layer-specific trainable weight matrix. H(l) ∈RN×D(l) is the matrix of hidden states in the l-th layer. H(0) = X is the input vectors. 
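A minimal PyTorch sketch of a single GCN layer (Eq. 6) is given below; the self-loop and symmetric normalisation of the adjacency follow Kipf and Welling (2017), on which the model builds, while the toy adjacency and one-hot inputs are illustrative (a hidden size of 16 is stated in Section 6.1).

    import torch

    def normalise(adj):
        """Normalised adjacency: add self-loops, then D^(-1/2) (A + I) D^(-1/2),
        as in Kipf and Welling (2017)."""
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

    def gcn_layer(a_hat, h, w):
        """One GCN layer, Eq. 6, here with tanh as the activation."""
        return torch.tanh(a_hat @ h @ w)

    # Toy graph with 4 nodes; the input H^(0) = X is one-hot.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 1.],
                        [0., 1., 0., 0.],
                        [0., 1., 0., 0.]])
    x = torch.eye(4)
    w0 = torch.randn(4, 16)                   # layer-specific weight matrix W^(0)
    h1 = gcn_layer(normalise(adj), x, w0)
    print(h1.shape)   # torch.Size([4, 16]); a second layer is applied to h1 in the same way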
It can either be one-hot representations of nodes or features of the nodes if available. σ(·) is the activation function. Multiple GCN layers can be stacked in order to capture high-order relations in the graph. We consider a two-layer GCN in this paper for semisupervised node classification. Our forward model takes the form: V = tanh  ˆA tanh  ˆAXW (0) W (1) (7) where X is the input matrix with one-hot representations and V is the representation matrix for all nodes in the graph. Figure 2 shows an example of how our GCN model aggregates information from a node’s local neighborhood. The orange document is the node of interest. Blue edges link to first order neighbors and green edges link to second order neighbors. Figure 2: Example of Unfolding of GCN Computational Graph 4.3 Document Classification The representation v of a news article (obtained with text models or graph models) captures the high level information of the document. It can be used as features for predicting the bias label with a feed-forward network. p = softmax(Wcv + bc) (8) We use the negative log likelihood of the correct labels as classification training loss: Lclf = − X a log paj (9) where j is the bias label of news article a. 5 Joint Model Given that we have two representations available for news articles, namely the textual one and social one, it is natural to make the prediction combining both of them. We propose to align the representations of the same document from graph and text models in a joint training fashion as shown in Figure 3. The objective function for the alignment is: Lalign = − X a∈A log P(eG a |eT a ) (10) where eT a is the embedding for document a based on its content, and eG a is the embedding for document a based on graph structures. P(eG a |eT a ) is defined the same way as in Eq. 3. Negative sampling is again utilized to reduce time complexity. Connecting the text and graph embedding of the same news articles, allows the bias signal to flow between the two sides. Therefore the text model may learn from the social signal and the graph model may use textual content to adjust its representation as well. We describe the loss function for the joint model in two settings - full supervision (i.e., labels associated with documents directly) and distant supervision, when bias information is only provided for a handful of users, which do not actively share documents. Full Supervision In the full supervision case, the loss consists of three parts, namely the classification loss of text model (LT clf), the classification loss of graph model (LG clf), and the loss for aligning the embeddings of the text and the graph models (Lalign). Ljoint = αLT clf + βLG clf + γLalign (11) Here α, β and γ are hyper-parameters to adjust the contribution of the three parts. We set all of them to default value 1 in experiments in this paper. Distant Supervision Unlike the full supervision case where we have training labels for documents, we only have access to the labels of political users. However, since the text and social representation use the same space, user bias information can be propagated to the document representation, acting as a distant supervision source. Inference Given the graph representation, decisions can be made in multiple ways. Each document has a dual representation, as a text node and a social node. 
Also, given the social context of a 2600 Text Representations 𝑎ଵ Graph-Based Representations 𝑎ଶ 𝑎ଷ 𝑎ସ 𝑎ହ 𝑎଺ 𝑢ଵ 𝑢ଶ 𝑢ଷ 𝑢ସ 𝑝ଵ 𝑝ଶ 𝑝ଷ 𝑎ଵ 𝑎ଶ 𝑎ଷ 𝑎ସ 𝑎ହ 𝑎଺ Classification Module Alignment Figure 3: Overall Architecture: Representations are learned for news articles based on textual information and graph structure; these two representations are aligned in our joint model; only labels of political users are available during training in distant supervision case document, decision can be defined over the users that share it (assuming that users tend to share information which agrees with their biases). To take advantage of that fact, we define a simplified inference process. At test time, we can predict the bias of a news article with the embeddings from text model (Text), the embeddings from graph model (Graph), and the embeddings of sharing users who shared this article (User). The last method (User) works by averaging bias prediction scores sb u for all Twitter users that shared an article a. The bias prediction score is computed in Eq. 8 before the softmax(·) applied. arg max b P u∈Ua sb u |Ua| (12) Finally, two or three of the scores listed above can be combined to make the decision. 6 Experiments We designed our experiments to evaluate the contribution of social information in both the fully supervised setting, and when only distant supervision is available through the social graph. We begin by evaluating several text classification models that help contextualize the social information. Finally, we evaluate our model’s ability to make predictions when very little social information is available at test time. 6.1 Implementation Details We used the spaCy toolkit for preprocessing the documents. All models are implemented with PyTorch (Paszke et al., 2017). Hyperbolic tangent (tanh) is used as non-linear activation function. We use feed-forward neural network with one hidden layer for the bias prediction task given textual or social representation. The sizes of LSTM hidden states for both word level and sentence level are 64. The sizes of hidden states for both GCN layers are 16. For the training of the neural network, we used the Adam optimizer (Kingma and Ba, 2014) to update the parameters. We use 5% of the training data as the validation set. We run the training for 200 epochs (50 epochs for HLSTM models), and select the best model based on performance on validation set. Other parameters in our model includes negative sample size k=5, mini-batch size b=30 (mini-batch update only used for HLSTM models). The learning rate is 0.001 for HLSTM models and 0.01 otherwise. 6.2 Experimental Results Text Classification Results The result of supervised text classification is summarized in Table 2. We report the accuracy of bias prediction. Results clearly show that HLSTM outperforms the other methods in supervised text classification setting. Also, adding the hand engineered bias features with HLSTM representation does not help to 2601 improve performance. Model Split Text Majority Rand 40.10 Event 40.10 Time 40.50 Linear BoW Rand 58.47 Event 59.88 Time 55.41 Bias Feat. Rand 54.06 Event 53.51 Time 52.96 Avg WE Rand 59.37 Event 59.37 Time 53.46 SkipThought Rand 68.67 Event 66.35 Time 60.89 HLSTM Rand 74.59 Event 73.55 Time 66.98 HLSTM + Bias Feat. Rand 69.32 Event 69.87 Time 66.79 Table 2: Supervised Classification Using Textual Features Network Classification Results We show the results of predicting bias using graph information alone, without text, in Table 3. 
The GCN model outperforms DOR significantly in each of the four settings. Similar to the text classification results, performance on random and event splits are comparable. However, there is a sharp drop in performance for time split. This can be explained by the fact that temporally separated news events will discuss different entities and world events and as a result will have very different word distributions. Event-separated splits are less susceptible to this problem, as similar figures and topics are likely to be discussed in different events. Model Split Graph User G+U DOR Rand 74.74 72.02 74.57 Event 74.87 72.74 75.18 Time 65.65 65.07 65.36 Dist 56.45 56.95 56.54 GCN Rand 88.65 78.83 88.89 Event 88.78 76.11 88.70 Time 81.14 71.31 82.00 Dist 63.72 40.08 67.03 Table 3: Classification Results Using Social Relations in Full Supervised and Distant Supervised Setting Joint Model Results Table 4 shows the results of our joint model. When aligning the text and graph embeddings using joint training, both show improvement, and prediction with text or graph representations alone is better than those listed in Table 2 and 3, especially for text. Note that the increase in accuracy is much greater for the more expressive HLSTM model. Making prediction with the aggregation of multiple scores usually leads to better accuracy. Interestingly, the model’s distant supervision performance is almost comparable with fully supervised text classification results. This demonstrates the strength of our joint model, and its ability to effectively propagate label information from users down to documents. We also evaluated our model when smaller amounts of social information was available at test time. We tested our joint model with only 50% and 10% of the links for test articles kept. The results are summarized in Table 5. Clearly the performance improves as more social links are available. However, even with little social links provided in the latter case, our joint model propagates information effectively and results in an increase in performance compared to text classification. Qualitative Analysis In Table 6, we compared the bias prediction by our text and joint model on several news articles (only titles shown in the table). These examples demonstrate the subtlety of bias expression in text, which helps motivate social representations to support the decision. 7 Conclusion In this paper we follow the intuition that the political perspectives expressed in news articles will also be reflected in the way the documents spread and the identity of the users who endorse them. We suggest a GCN-based model capturing this social information, and show that it provides a distant supervision signal, resulting in a model performing comparably to supervised text classification models. We also study this approach in the supervised setting and show that it can significantly enhance a text-only classification model. Modeling the broader context in which text is consumed is a vital step towards getting a better understanding of its perspective. We intend to study fine-grained political perspectives, capturing how different events are framed. Acknowledgements We thank the reviewers for their insightful comments. This work was partially supported by a Google Gift. 
2602 Model Split Graph User G+U Text G+T G+U+T GCN + SkipThought Rand 89.95 81.49 89.75 70.61 90.34 91.02 Event 89.40 79.06 89.64 69.16 90.15 90.78 Time 84.95 76.59 85.30 64.12 84.09 86.25 Dist 67.78 45.30 70.03 58.68 69.82 70.66 GCN + HLSTM Rand 89.03 83.66 88.57 86.84 91.48 91.74 Event 89.34 80.22 88.62 88.39 91.69 91.72 Time 84.83 74.50 85.09 81.36 85.57 86.21 Dist 71.74 69.39 71.16 61.13 72.16 71.85 Table 4: Results of Joint Model Combining Text and Graph Relations Model Split Graph User G+U Text G+T G+U+T GCN + HLSTM (50%) Rand 86.73 78.62 86.24 85.62 89.31 89.35 Event 86.55 78.34 85.89 84.52 89.21 89.51 Time 82.25 70.93 81.45 80.05 85.57 85.48 GCN + HLSTM (10%) Rand 76.13 57.76 75.55 78.61 81.35 81.49 Event 76.58 57.10 75.75 77.60 80.55 80.93 Time 73.24 54.09 72.48 72.92 76.52 76.75 Table 5: Results of Joint Model with Reduced Links for Test Documents Text Joint Gold Title Right Right Right Hacked Powell email reveals Hillary hates Obama for 2008 Right Right Right Donald Trump will let James Comey testify Center Center Center Clinton: I am done with being a candidate Center Center Center Senate confirms Sessions as attorney general Left Left Left Clinton: Trump Doesn’t See President Obama as an American Video Left Left Left Trump uses Twitter to promote leaked intelligence on North Korea Center Left Left Hillary Clinton’s Campaign Says It Will Participate In Wisconsin Recount Left Center Center Supreme Court justices hint at striking Voting Rights Act provision Left Center Right Boston Marathon bombs: how investigators use technology to identify suspects Right Right Left Israel risks becoming apartheid state if peace talks fail, says John Kerry Table 6: Examples of Bias Prediction by Text and Joint Model References Eric Baumer, Elisha Elovic, Ying Qin, Francesca Polletta, and Geri Gay. 2015. Testing and comparing computational approaches for identifying the language of framing in political news. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1472–1482, Denver, Colorado. Association for Computational Linguistics. Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2010. Vocabulary choice as an indicator of perspective. In Proceedings of the ACL 2010 Conference Short Papers, pages 253–257, Uppsala, Sweden. Association for Computational Linguistics. Ceren Budak, Sharad Goel, and Justin M Rao. 2016. Fair and balanced? quantifying media bias through crowdsourced content analysis. Public Opinion Quarterly, 80(S1):250–271. Dallas Card, Justin Gross, Amber Boydstun, and Noah A. Smith. 2016. Analyzing framing through the casts of characters in the news. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1410–1420, Austin, Texas. Association for Computational Linguistics. Yoonjung Choi and Janyce Wiebe. 2014. +/EffectWordNet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181–1191, Doha, Qatar. Association for Computational Linguistics. David Duvenaud, Dougal Maclaurin, Jorge AguileraIparraguirre, Rafael G´omez-Bombarelli, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS’15, pages 2224–2232, Cambridge, MA, USA. MIT Press. 
Heba Elfardy, Mona Diab, and Chris Callison-Burch. 2015. Ideological perspective detection using semantic features. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 137–146, Denver, Colorado. Association for Computational Linguistics. Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in Russian news: a computational analysis of intricate political strategies. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2603 pages 3570–3580, Brussels, Belgium. Association for Computational Linguistics. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1263–1272, International Convention Centre, Sydney, Australia. PMLR. Maria Glenski, Tim Weninger, and Svitlana Volkova. 2018. Identifying and understanding user reactions to deceptive and trusted social news sources. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 176–181, Melbourne, Australia. Association for Computational Linguistics. Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 503–511, Boulder, Colorado. Association for Computational Linguistics. Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 855–864, New York, NY, USA. ACM. Benjamin D. Horne, William Dron, Sara Khedr, and Sibel Adali. 2018a. Assessing the news landscape: A multi-module toolkit for evaluating the credibility of news. In Companion Proceedings of the The Web Conference 2018, WWW ’18, pages 235–238, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Benjamin D. Horne, Sara Khedr, and Sibel Adali. 2018b. Sampling the news producers: A large news and feature data set for the study of the complex media landscape. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 2528, 2018., pages 518–527. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1113–1122, Baltimore, Maryland. Association for Computational Linguistics. Kristen Johnson and Dan Goldwasser. 2016. “all I know about politics is what I read in twitter”: Weakly supervised models for extracting politicians’ stances from twitter. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2966– 2977, Osaka, Japan. The COLING 2016 Organizing Committee. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Thomas N. Kipf and Max Welling. 2016. Variational graph auto-encoders. CoRR, abs/1611.07308. Thomas N. Kipf and Max Welling. 2017. 
Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 712, 2015, Montreal, Quebec, Canada, pages 3294– 3302. Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on?: Identifying perspectives at the document and sentence levels. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X ’06, pages 109–116, Stroudsburg, PA, USA. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics. Fred Morstatter, Liang Wu, Uraz Yavanoglu, Stephen R. Corman, and Huan Liu. 2018. Identifying framing bias in online news. Trans. Soc. Comput., 1(2):5:1–5:18. Shirui Pan, Jia Wu, Xingquan Zhu, Chengqi Zhang, and Yang Wang. 2016. Tri-party deep network representation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI’16, pages 1895–1901. AAAI Press. 2604 Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Ayush Patwari, Dan Goldwasser, and Saurabh Bagchi. 2017. TATHYA: A multi-classifier system for detecting check-worthy statements in political debates. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 2259–2262. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 701– 710, New York, NY, USA. ACM. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650–1659, Sofia, Bulgaria. Association for Computational Linguistics. Filipe Nunes Ribeiro, Lucas Henrique, Fabr´ıcio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, and Krishna P. Gummadi. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 25-28, 2018., pages 290–299. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web - 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, pages 593–607. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, pages 1067–1077, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 647–653, Vancouver, Canada. Association for Computational Linguistics. Robert West, Hristo S. Paskov, Jure Leskovec, and Christopher Potts. 2014. Exploiting social network structure for person-to-person sentiment analysis. Transactions of the Association for Computational Linguistics, 2. Yi Yang, Ming-Wei Chang, and Jacob Eisenstein. 2016a. Toward socially-infused information extraction: Embedding authors, mentions, and entities. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1452–1461, Austin, Texas. Association for Computational Linguistics. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016b. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. Daniel Xiaodan Zhou, Paul Resnick, and Qiaozhu Mei. 2011. Classifying the political leaning of news articles and users from user votes. In Proceedings of the Fifth International Conference on Weblogs and Social Media, Barcelona, Catalonia, Spain, July 1721, 2011.
2019
247
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2605–2610 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2605 Fine-Grained Spoiler Detection from Large-Scale Review Corpora Mengting Wan∗, Rishabh Misra†, Ndapa Nakashole∗, Julian McAuley∗ ∗University of California, San Diego, †Amazon.com, Inc {m5wan, r1misra, nnakashole, jmcauley}@ucsd.edu Abstract This paper presents computational approaches for automatically detecting critical plot twists in reviews of media products. First, we created a large-scale book review dataset that includes fine-grained spoiler annotations at the sentence-level, as well as book and (anonymized) user information. Second, we carefully analyzed this dataset, and found that: spoiler language tends to be book-specific; spoiler distributions vary greatly across books and review authors; and spoiler sentences tend to jointly appear in the latter part of reviews. Third, inspired by these findings, we developed an end-to-end neural network architecture to detect spoiler sentences in review corpora. Quantitative and qualitative results demonstrate that the proposed method substantially outperforms existing baselines. 1 Introduction ‘Spoilers’ on review websites can be a concern for consumers who want to fully experience the excitement that arises from the pleasurable uncertainty and suspense of media consumption (Loewenstein, 1994). Certain review websites allow reviewers to tag whether their review (or sentences in their reviews) contain spoilers. However, we observe that in reality only a few users utilize this feature. Thus, requiring sentence-level spoiler annotations from users is not a successful approach to comprehensive fine-grained spoiler annotation. One possible solution is crowdsourcing: whereby consumers can report reviews that reveal critical plot details. This is complementary to the self-reporting approach, but may have scalability issues as it is relatively difficult to engage sufficient consumers in a timely fashion. Therefore, we seek to address the lack of completeness exhibited by self-reporting and crowdsourcing. We instead focus on developing machine learning techniques • This was a perfect, albeit bloody, end to the series. • Though there were deaths that were definitely unwarranted: <spoiler>Fred Hedwig Moody Tonks Lupin Dobby,</spoiler> there were some really heartfelt and memorable moments: <spoiler>Narcissa saving Harry, Ron coming back, Hermione and Ron, Harry and Ginny, Molly killing Bellatrix, etc.</spoiler> • I wish we could have spent more time at Hogwarts, as one of my favorite characters, the amazing Minerva McGonagall, resides there, and we couldn’t see more of her amazingness in the Battle of Hogwarts. • Harry Potter was a really, really great series that I think will be (and is) timeless. Harry Potter and the Deathly Hallows www.goodreads.com/book/show/136251 p=0.35 p=0.81 p=0.44 p=0.06 predictions from SpoilerNet review document review subject (i.e., item) review author (i.e., user) Figure 1: An example review from Goodreads, where spoiler tags and the predicted spoiler probabilities from SpoilerNet are provided. to automatically detect spoiler sentences from review documents. Related Work. Surprisingly, we find that spoiler analysis and detection is a relatively unexplored topic; previous work focuses on leveraging simple topic models (Guo and Ramakrishnan, 2010), or incorporating lexical features (e.g. 
unigrams) (Boyd-Graber et al., 2013; Iwai et al., 2014), frequent verbs and named entities (Jeon et al., 2013), and external meta-data of the review subjects (e.g. genres) (Boyd-Graber et al., 2013) in a standard classifier such as a Support Vector Machine. Deep learning methods were first applied to this task by a recent study (Chang et al., 2018), where the focus is modeling external genre information. Possibly due to the lack of data with complete review documents and the associated user (i.e., the review author) and item (i.e., the subject to review) ids, issues such as the dependency among sentences, the user/item spoiler bias, as well as the sentence semantics under different item contexts, have never been studied in this domain. Neural network approaches have achieved great success on sentence/document classification tasks, including CNN-based approaches (Kim, 2014), 2606 0.0 0.5 1.0 avg. position 0.0 0.1 0.2 frequency (a) 0 10 20 #sent. in each span 0.00 0.25 0.50 0.75 frequency real rand (b) non-spoiler spoiler 0.07 0.08 0.09 Item Specificity (DF-IIF) (c) rank of token 0.0 0.5 1.0 1.5 DF-IIF harry, rowling, potter, snape, dumbledore, hallows, ron, hermione, voldemort, hogwarts Harry Potter #7 (d) 0.0 0.5 1.0 %spoiler doc. per item/user 0 1 2 3 4 5 log10( #items/users ) item user (e) Figure 2: Distributions of (a) average spoiler sentence position; (b) the length of each spoiler span; (c) itemspecificity of non-spoiler and spoiler sentences (sample means and 95% confidence intervals); (d) DF-IIF of each term and top ranked item-specific terms for an example book; (e) the percentage of spoiler reviews per book/user. RNN-based approaches (Yang et al., 2016), and self-attention-based approaches (Devlin et al., 2018). In this study, we cast the spoiler sentence detection task as a special sentence classification problem, but focus on modeling domain-specific language patterns. Contributions. To address real-world, large-scale application scenarios and to facilitate the possibility of adopting modern ‘data-hungry’ language models in this domain, we collect a new largescale book review dataset from goodreads.com. Spoiler tags in this dataset are self-reported by the review authors and are sentence-specific, which makes it an ideal platform for us to build supervised models. Motivated by the results from preliminary analysis on Goodreads, we propose a new model SpoilerNet for the spoiler sentence detection task. Using the new Goodreads dataset and an existing small-scale TV Tropes dataset (BoydGraber et al., 2013), we demonstrate the effectiveness of the proposed techniques. 2 The Goodreads Book Review Dataset We scraped 1,378,033 English book reviews, across 25,475 books and 18,892 users from goodreads.com, where each book/user has at least one associated spoiler review. These reviews include 17,672,655 sentences, 3.22% of which are labeled as ‘spoiler sentences.’ To our knowledge, this is the first dataset with fine-grained spoiler annotations at this scale. This dataset is available at https://github.com/MengtingWan/ goodreads. Appearance of Spoiler Sentences. We first analyze the appearance of spoiler sentences in reviews by evaluating 1) the average position of spoiler sentences in a review document and 2) the average number of sentences in a spoiler span (a series of consecutive spoiler sentences). We present the first evaluation in Figure 2a. 
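Both statistics can be computed directly from the per-sentence spoiler labels. The following is a minimal sketch in Python, assuming each review is given as a list of 0/1 labels (the function and data layout are illustrative, not the code used to produce Figure 2):

```python
def position_and_span_stats(reviews):
    """reviews: list of per-review 0/1 spoiler label sequences, e.g. [0, 0, 1, 1, 0].
    Returns (average relative position of spoiler sentences per review,
             average number of sentences per spoiler span)."""
    positions, span_lengths = [], []
    for labels in reviews:
        n = len(labels)
        spoiler_pos = [i / (n - 1) if n > 1 else 0.0
                       for i, y in enumerate(labels) if y == 1]
        if spoiler_pos:                      # reviews without spoilers contribute nothing
            positions.append(sum(spoiler_pos) / len(spoiler_pos))
        run = 0
        for y in list(labels) + [0]:         # sentinel closes a trailing span
            if y == 1:
                run += 1
            elif run:
                span_lengths.append(run)
                run = 0
    return (sum(positions) / len(positions),
            sum(span_lengths) / len(span_lengths))
```

The random benchmark in Figure 2b can be approximated by shuffling each label sequence before the span computation.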
Compared with the expected average position of randomly sampled sentences (0.5), we observe that spoiler contents tend to appear later in a review document. For the second evaluation, we create a benchmark distribution by randomly sampling sentences within reviews and averaging the length of each span formed by those sentences. From Figure 2b, compared with this random benchmark, we notice that real-world spoiler sentences tend to be ‘clumped’ (i.e., more sentences in each span). Item-Specificity. As book-specific terms such as locations or characters’ names could be informative to reveal plot information (Jeon et al., 2013), we develop an effective method to identify the specificity of tokens regarding each item (i.e., each book) as follows:1 • (Popularity) For word w, item i, we calculate the item-wise document frequency (DF) as DF w,i = |Dw,i| |Di| ; • (Uniqueness) For each word w, we calculate its inverse item frequency (IIF) as IIF w = log |I|+ϵ |Iw|+ϵ; • Then for each term w, item i, we are able to obtain the DF-IIF as DF w,i × IIF w. We show the distributions of the average DFIIF values of randomly sampled non-spoiler and spoiler sentences in Figure 2c, where we find spoilers are likely to be more book-specific. The ranking of terms for the book Harry Potter #7 is presented in Figure 2d, where we find that all of the top 10 terms refer to the character/author names and important plot points. Item/User Spoilers and Self-Reporting Bias. We further investigate the fraction of reviews containing spoiler content per item/user to analyze the spoiler appearance tendencies for each item and 1|Di|: #reviews associated with i; |Dw,i|: #reviews containing word w; |Iw|: #items containing w; |I|: the total number of items. ϵ = 1 is a smoothing term. 2607 𝒃𝒊𝒃𝒖 ℎ% & ℎ' & ℎ( & ℎ) & ℎ% * ℎ' * ℎ( * ℎ) * ℎ%,% (&) ℎ%,% (*) ℎ%,' (&) ℎ%,' (*) ℎ%,( (&) ℎ%,( (*) ℎ%,) (&) ℎ%,) (*) 𝛼%,% 𝛼%,' 𝛼%,( 𝛼%,) /𝑦% /𝑦' /𝑦( /𝑦) Output Layer Sentence Encoder Word Attention Word Encoder Input Layer item bias (𝒊: review subject) user bias (𝒖: review author) text embedding item-specificity 𝒆𝒘𝟏,𝟏𝒇𝒘𝟏,𝟏,𝒊 𝒆𝒘𝟏,𝟐𝒇𝒘𝟏,𝟐,𝒊 𝒆𝒘𝟏,𝟑𝒇𝒘𝟏,𝟑,𝒊 𝒆𝒘𝟏,𝟒𝒇𝒘𝟏,𝟒,𝒊 Figure 3: Model architecture of SpoilerNet user (Figure 2e). We notice that the distributions are highly skewed indicating significantly different spoiler tendencies across users and items. Summary of Insights. We summarize the obtained insights as follows: 1) Spoiler sentences generally tend to appear together in the latter part of a review document, which indicates the dependency among sentences and motivates us to consider encoding such information in a spoiler detection model; 2) Item-specificity could be useful to distinguish spoiler contents; 3) Distributions of self-reported spoiler labels are dramatically different across users and items, which motivates us to explicitly calibrate them in the model design. 3 The Proposed Approach: SpoilerNet We formulate the predictive task as a binary classification problem: given a sentence s in a review document, we aim to predict if it contains spoilers (ys = 1) or not (ys = 0). We introduce SpoilerNet, which extends the hierarchical attention network (HAN) (Yang et al., 2016) by incorporating the above insights. We use the sentence encoder in HAN to model the sequential dependency among sentences. We incorporate the item-specificity information in the word embedding layer to enhance word representations based on different item (e.g. book) contexts. 
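The item-specificity scores just mentioned (the DF-IIF values defined in Section 2) can be computed in a single pass over the corpus. The sketch below is plain Python under an assumed input layout — one set of tokens per review, tagged with its book id — and is illustrative rather than the authors' implementation:

```python
import math
from collections import defaultdict

def df_iif(reviews, eps=1.0):
    """reviews: iterable of (item_id, token_set) pairs, one per review.
    Returns {(word, item): DF-IIF} with DF_{w,i} = |D_{w,i}| / |D_i| and
    IIF_w = log((|I| + eps) / (|I_w| + eps))."""
    docs_per_item = defaultdict(int)      # |D_i|
    docs_with_word = defaultdict(int)     # |D_{w,i}|
    items_with_word = defaultdict(set)    # I_w
    items = set()
    for item, tokens in reviews:
        items.add(item)
        docs_per_item[item] += 1
        for w in tokens:                  # token_set: each word counted once per review
            docs_with_word[(w, item)] += 1
            items_with_word[w].add(item)
    n_items = len(items)
    scores = {}
    for (w, item), d_wi in docs_with_word.items():
        df = d_wi / docs_per_item[item]
        iif = math.log((n_items + eps) / (len(items_with_word[w]) + eps))
        scores[(w, item)] = df * iif
    return scores
```

Ranking the vocabulary of a single book by these scores yields lists such as the Harry Potter #7 example in Figure 2d.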
Item and user bias terms are included in the output layer to further alleviate the disparity of spoiler distributions. Figure 3 shows the overall architecture of our proposed SpoilerNet. We briefly describe each layer of this network as follows. Input Layer. For each word w, we introduce a K-dimensional text embedding ew to represent its lexical information, which is shared across the corpus. For each word in each sentence, we calculate its corresponding item specificity features: fw,i = [DF w,i, IIF w, DF w,i × IIF w]. We expect this component could help distinguish different word semantics under different contexts (e.g., ‘Green’ indicates a character’s name with high item-specificity while it represents a color otherwise). The concatenated vector [ew; fi,w] is used as the input word embedding of word w in sentence s. Word Encoder, Word Attention, and Sentence Encoder. Next we pass words through bidirectional recurrent neural networks (bi-RNN) with Gated Recurrent Units (GRU) (Cho et al., 2014). GRUs accept a sequence of input embedding vectors xt and recursively encode them into hidden states ht. Words are fed sequentially through a GRU and in reverse order through another GRU. Then we use the concatenation of these forward and backward hidden state vectors hw = [h(f) w ; h(b) w ] to represent a word w in a sentence s. Then we introduce a word attention mechanism to focus on revelatory words (e.g., ‘kill’, ‘die’), which yields µw =tanh(Wahw + ba), αw = exp(νT µw) P w′∈s, exp(νT µw′), vs = X w∈s αwhw, where Wa, ba and ν are model parameters. The weighted sums vs are used as an input vector to represent sentence s in the following sentencelevel model. Within each review, we pass the sentence input vectors {vs} to another bi-RNN with GRU to encode the sequential dependency among sentences. We concatenate the resulting forward and backward hidden states to get the final representation of a sentence, i.e., hs = [h(f) s ; h(b) s ]. Output Layer. The spoiler probability of a sentence s can be calculated as ps = σ(wT o hs + bi + bu + b). Here for each item i and each user u, we introduce learnable parameters bi, bu to model the item and user biases which can not be explained by the language model. Then we consider minimizing the following training loss L = X (ys log ps + η(1 −ys) log(1 −ps)) , where η is a hyper-parameter used to balance positive and negative labels in the training data. 2608 4 Experiments We consider the following two datasets: Goodreads. We use the top 20,000 frequent unigrams as our vocabulary. We randomly select 20% of the reviews for testing. Among the remaining 80%, we separate 10,000 reviews for validation and use all other reviews for training. As the distribution of spoiler labels is severely imbalanced, we decrease the weight of negative labels to η = 0.05, which yields best results among {0.05, 0.1, 0.2, 0.5} on the validation set. TV Tropes is a small-scale benchmark dataset collected from tvtropes.org (Boyd-Graber et al., 2013). This dataset contains 16,261 singlesentence comments about 884 TV programs, which have been partitioned into 70/10/20 training/validation/test splits. All unigrams are kept in the vocabulary. As it is a balanced dataset (52.72% of the sentences are spoilers), we set η = 1. We use the ADAM optimizer (Kingma and Ba, 2014) with a learning rate of 0.001, a fixed batch size (64) and dropout (0.5) in the fully connected output layer. The dimensionalities of all hidden states and the context attention vector ν are set to 50. 
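With these dimensionalities in mind, the word attention and the bias-augmented output layer of Section 3 can be sketched as follows. The PyTorch fragment is illustrative only: the bi-GRU word and sentence encoders are omitted, and the module and variable names are not those of the original implementation.

```python
import torch
import torch.nn as nn

class WordAttention(nn.Module):
    """mu_w = tanh(W_a h_w + b_a); alpha_w = softmax(nu^T mu_w); v_s = sum_w alpha_w h_w."""
    def __init__(self, hidden):                           # hidden = dim of [h_w^(f); h_w^(b)]
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)             # W_a, b_a
        self.context = nn.Parameter(torch.randn(hidden))  # nu

    def forward(self, h):                                 # h: (batch, n_words, hidden)
        mu = torch.tanh(self.proj(h))
        alpha = torch.softmax(mu @ self.context, dim=1)   # (batch, n_words)
        return (alpha.unsqueeze(-1) * h).sum(dim=1)       # v_s: (batch, hidden)

class BiasedOutput(nn.Module):
    """p_s = sigmoid(w_o^T h_s + b_i + b_u + b) with learnable item and user biases."""
    def __init__(self, hidden, n_items, n_users):
        super().__init__()
        self.w_o = nn.Linear(hidden, 1)                   # w_o and the global bias b
        self.b_item = nn.Embedding(n_items, 1)
        self.b_user = nn.Embedding(n_users, 1)

    def forward(self, h_s, item, user):                   # h_s: (batch, hidden)
        logit = (self.w_o(h_s) + self.b_item(item) + self.b_user(user)).squeeze(-1)
        return torch.sigmoid(logit)

def spoiler_loss(p, y, eta=0.05):
    """Weighted negative log-likelihood; eta down-weights the abundant negative labels.
    Averaging over the batch (rather than summing) is a choice of this sketch."""
    return -(y * torch.log(p) + eta * (1 - y) * torch.log(1 - p)).mean()
```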
Word embeddings are initialized with pretrained fasttext word vectors (Joulin et al., 2016). Baselines. We consider the following baselines: • SVM. Similar to previous studies (BoydGraber et al., 2013; Jeon et al., 2013), we apply SVM with a linear kernel where counts of words are used as features. • SVM-BOW. Weighted averages of fasttext word embeddings (Joulin et al., 2016) are used as sentence features, where the weights are TfIdfs. • CNN. textCNN (Kim, 2014) is applied where we use filter sizes 3,4, and 5, each with 50 filters. • HAN. The item-specificity features and the item/user bias terms are removed from SpoilerNet. This can be regarded as a variant of HAN (Yang et al., 2016). We add the item-specificity features and the item/user bias respectively on the above baselines to evaluate their effectiveness. We remove each of the word attention module, the pre-trained word embedding initialization, and the sentence encoder from HAN to evaluate their performance. Evaluation. Due to the possible subjectivity of users’ self-reported spoiler tags (i.e., different Goodreads TV Tropes AUC AUC(d.) AUC Acc. SVM 0.744 0.790 0.730 0.657 + item-spec. 0.746 ↑0.800 ↑ 0.747 ↑0.653 ↓ + bias 0.864 ↑0.793 ↑ 0.722 ↓0.536 ↓ SVM-BOW 0.692 0.729 0.756 0.702 + item-spec. 0.693 ↑0.734 ↑ 0.774 ↑0.710 ↑ + bias 0.838 ↑0.742 ↑ 0.753 ↓0.704 ↑ CNN 0.777 0.825 0.774 0.709 + item-spec. 0.783 ↑0.827 ↑ 0.790 ↑0.723 ↑ + bias 0.812 ↑0.822 ↓ 0.781 ↑0.711 ↑ - word attn. 0.898 ↓0.880 ↓ 0.760 ↓0.695 ↓ - word init. 0.900 ↓0.880 ↓ 0.702 ↓0.652 ↓ - sent. encoder 0.790 ↓0.836 ↓ HAN 0.901 0.884 0.783 0.720 + item-spec. 0.906 ↑0.889 ↑ 0.803 ↑0.733 ↑ + bias 0.916 ↑0.887 ↑ 0.789 ↑0.729 ↑ SpoilerNet 0.919 0.889 0.803 0.737 Table 1: Spoiler sentence detection results on Goodreads and TV Tropes, where arrows indicate the performance boost (↑) or drop (↓) compared with the base model in each group. Best results are highlighed. users may maintain different standards for various review subjects), we regard the area under the ROC curve (AUC) as our primary evaluation metric, i.e., we expect a positive spoiler sentence is ranked higher than a negative non-spoiler sentence based on ps. For Goodreads, we also calculate the sentence ranking AUC within each review document and report the average across reviews. Note this averaged document AUC is invariant of item/user self-reporting bias, thus the language model can be evaluated exclusively. We also report accuracy on TV Tropes so that our results can be fairly compared with existing studies (Boyd-Graber et al., 2013; Chang et al., 2018). Results. Spoiler detection results are presented in Table 1, where the complete SpoilerNet model consistently and substantially outperform baselines on both datasets. The accuracy that SpoilerNet achieved on TV Tropes beats the highest one among existing methods without using external item genre information (0.723), but is slightly lower than the best published result (0.756) where a genre encoder is applied (Chang et al., 2018). We notice adding the item-specificity and user/item bias generally improves the performance of most baselines except SVM on TV Tropes. We find the 2609 pre-trained word embedding initialization is particularly important on TV Tropes. One possible reason could be that the model capacity is too large compared with this dataset so that it easily overfits without proper initialization. 
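For reference, the averaged document AUC used in this evaluation setup can be computed with scikit-learn in a few lines. The sketch assumes the predictions are available as (review_id, gold label, predicted probability) triples; this layout is illustrative, not the actual evaluation script:

```python
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def averaged_document_auc(rows):
    """rows: iterable of (review_id, y_true, p_spoiler). Computes the sentence-ranking
    AUC within each review and averages over reviews; reviews containing only one
    class are skipped, since AUC is undefined for them."""
    by_review = defaultdict(lambda: ([], []))
    for review_id, y, p in rows:
        by_review[review_id][0].append(y)
        by_review[review_id][1].append(p)
    aucs = [roc_auc_score(ys, ps)
            for ys, ps in by_review.values() if len(set(ys)) == 2]
    return sum(aucs) / len(aucs)
```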
Note that a substantial performance drop can be observed by removing the sentence encoder on Goodreads, which validates the importance of modeling sentence dependency in this task. 5 Error Analysis We provide case studies to understand the limitations of the proposed model. We show review examples for three popular books Murder on the Orient Express, The Fault in Our Stars, and The Hunger Games respectively. For each example, we provide the review text, the groudtruth spoiler tags (i.e., if a sentence contains spoilers or not) and the predicted spoiler probabilities from SpoilerNet. Distracted by Revelatory Terms. We find the majority of false positively predicted sentences from SpoilerNet can be found in this category. As shown in Table 2, the proposed network could be easily distracted by revelatory terms (e.g. ‘murder’, ‘killed’). This leads to a potential direction for improvement: emphasizing ‘difficult’ negative sentences with revelatory terms during training (e.g. by ‘hot’ negative sampling) such that the semantic nuances can be addressed. Prob. Label Review Text 0.35 False Language: Low (one/two usages of d*mn) 0.32 False Religion: None 0.39 False Romance: None 0.59 False Violence: Low (It’s a murder mystery! Someone is killed, but it is only ever talked about.) Table 2: An example review for the book Murder on the Orient Express. Distracted by Surrounding Sentences. Although the model is able to capture the ‘coagulation’ of spoilers (i.e., spoiler sentences tend to appear together), it can be distracted by such a property as well. As presented in Table 3, the third sentence was mistakenly predicted possibly because it immediately follows a spoiler sentence and contains an item-specific revelatory term (the character name ‘Hazel’). This indicates the current model still needs to comprehend fine-grained sentence dependencies, so that it can decide whether to propagate or ignore the surrounding spoiler signals under different contexts. Prob. Label Review Text 0.08 False This is not your typical teenage love story. 0.86 True In fact it doesn’t even have a happy ending. 0.70 False I have to say Hazel with all her pragmatism and intelligence has won me over. 0.43 False She is on the exact opposite side of the spectrum than characters like the hideous Bella Swan. Table 3: An example review for the book The Fault in Our Stars. Inconsistent Standards of Spoiler Tags. We find some self-reported labels are relatively controversial, which also verifies our suspicion regarding the subjectivity of spoiler tags. As shown in Table 4, the last sentence was classified as ‘nonspoiler’ by the language model, while reported by the review author as the opposite, probably due to its close connection to the previous spoiler sentence. Note that such an example is difficult to justify even by human annotators. This motivates us to consider spoiler detection as a ranking task instead of conventional binary classification. In this way sentences can be legitimately evaluated in the same context (e.g. the same review document) regardless of absolute thresholds. Besides the evaluation metrics, ranking losses can also be considered in future studies. Prob. Label Review Text 0.01 False The writing is simplistic, a little more so than befits even the 1st-person narrative of a 16year-old. 0.50 True One of things I liked best about this is having a heroine who in addition to acting for the cameras, also has to fake her affection to someone who reciprocates far more than she feels. 
0.15 True I found it very relatable. Table 4: An example review for the book The Hunger Games. 6 Conclusions and Future Work Our new dataset, analysis of spoiler language, and positive results facilitate several directions for future work. For example, revising spoiler contents in a ‘non-spoiler’ way would be an interesting language generation task. In addition to review semantics, syntax information could be incorporated in a spoiler language model. The Goodreads dataset may also serve as a powerful spoiler source corpus. Models and knowledge learned on this dataset could be transferred to other corpora where spoiler annotations are limited or unavailable (e.g. detecting spoilers from tweets). 2610 References Jordan L. Boyd-Graber, Kimberly Glasgow, and Jackie Sauter Zajac. 2013. Spoiler alert: Machine learning approaches to detect social media posts with revelatory information. In ASIS&T Annual Meeting. Buru Chang, Hyunjae Kim, Raehyun Kim, Deahan Kim, and Jaewoo Kang. 2018. A deep neural spoiler detection model using a genre-aware attention mechanism. In PAKDD. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Sheng Guo and Naren Ramakrishnan. 2010. Finding the storyteller: Automatic spoiler tagging using linguistic cues. In COLING. Hidenari Iwai, Yoshinori Hijikata, Kaori Ikeda, and Shogo Nishida. 2014. Sentence-based plot classification for online review comments. In WI-IAT. Sungho Jeon, Sungchul Kim, and Hwanjo Yu. 2013. Don’t be spoiled by your friends: Spoiler detection in TV program tweets. In ICWSM. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv´e J´egou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. CoRR, abs/1612.03651. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. George Loewenstein. 1994. The psychology of curiosity: A review and reinterpretation. Psychological bulletin, 116(1):75. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL.
2019
248
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2611–2618 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2611 Celebrity Profiling Matti Wiegmann1,2 Benno Stein1 Martin Potthast3 1Bauhaus-Universität Weimar 2German Aerospace Center 3Leipzig University <first>.<last>@[uni-weimar|dlr|uni-leipzig].de Abstract Celebrities are among the most prolific users of social media, promoting their personas and rallying followers. This activity is closely tied to genuine writing samples, which makes them worthy research subjects in many respects, not least profiling. With this paper we introduce the Webis Celebrity Corpus 2019. For its construction the Twitter feeds of 71,706 verified accounts have been carefully linked with their respective Wikidata items, crawling both. After cleansing, the resulting profiles contain an average of 29,968 words per profile and up to 239 pieces of personal information. A crossevaluation that checked the correct association of Twitter account and Wikidata item revealed an error rate of only 0.6%, rendering the profiles highly reliable. Our corpus comprises a wide cross-section of local and global celebrities, forming a unique combination of scale, profile comprehensiveness, and label reliability. We further establish the state of the art’s profiling performance by evaluating the winning approaches submitted to the PAN gender prediction tasks in a transfer learning experiment. They are only outperformed by our own deep learning approach, which we also use to exemplify celebrity occupation prediction for the first time. 1 Introduction Author profiling is about predicting personal traits of individual authors based on their writing style. Frequently studied traits are demographics such as gender, age, native language or dialect, and even personality. Applications of author profiling include marketing, social science, risk assessment, and forensics. Given the high expectations that are implied by these and similar applications, the creation of a valid automatic profiler for a given trait, let alone many, depends on the availability of carefully constructed corpora. Corpus construction for author profiling has always been difficult for lack of large-scale distant supervision sources that provide for genuine pieces of writing from many different authors alongside personal information. In part, the aforementioned selection of demographics that are frequently studied reflects the availability of corresponding ground truth. In this regard, one source of ground truth, available in large quantities, high diversity of traits, and near-perfect label reliability, has been overlooked: celebrities. The contributions of our research are threefold:1 First, in Section 2, we survey the state of the art in constructing author profiling corpora for the first time, compiling a taxonomy of construction strategies applied. Second, in Section 3, we report on the construction of the first large-scale corpus of celebrity profiles, describing our acquisition approach based on a reliable matching of Twitter accounts to Wikidata items. Third, in Section 4, we carry out a prediction experiment on the most widely studied trait, gender, comparing the performance of our own deep learning approach with that of the four best-performing ones submitted to the recent PAN author profiling competitions from 2015 to 2018. Moreover, we exemplify the prediction of celebrity occupations. 
2 Related Work We analyzed 29 publications on author profiling the authors of which explicitly describe their data acquisition and corpus construction strategies. The strategies have been reviewed, abstracted, and mapped into a taxonomy, which in turn enabled us to identify specific quality criteria. Table 1 overviews these publications and reports key figures, personal traits, and the underlying acquisition strategy. Note that a large part of this research builds upon the pioneering works done 1Code and corpus: https://github.com/webis-de/ACL-19 2612 Dataset Genre Lang. Authors Words Personal Traits Label Acquisition Strategy Mikros (2013) Blogs 1 100 20,323 Gender AIS Nguyen et al. (2011) Blogs 1 1,997 27,303 Age AIS+U Rosenthal and McKeown (2011) Blogs 1 24,500 (?) Age AIS Schler et al. (2006) Blogs 1 37,478 7,885 Gender AIS PAN13 (2013) Blogs 2 346,100 632 Age, Gender AIS Wang et al. (2016) Sina Weibo 1 742,323 (?) Age, Education, Gender, Relationship AIS Burger et al. (2011) Tweets 12+ 183,729 283* Gender AIU MEX-A3T (2018) Tweets 1 5,000 17,195* Education, Residence AIU Gjurkovic and Snajder (2018) Comments 1 23,503 24,861 Personality (MBTI) AIU Plank and Hovy (2015) Tweets 1 1,500 12,880 Gender, Personality (MBTI) AIU Preotiuc-Pietro et al. (2015) Tweets 1 5,191 26,415* Occupation (SOC) AIU Ramos et al. (2018) Facebook 1 1,019 2,178 Age, Education, Gender, AIU Personality (Big Five), Religion PAN17 (2017a) Twitter 4 19,000 1,195 Dialect, Gender AIU Twisty (2016) Twitter 6 18,168 25,400 Gender, Personality (MBTI) AIU Preotiuc-Pietro et al. (2017) - D2 Tweets 1 13,651 23,717* Politics AIU TAT en (2007a) Emails 1 1,033 3,259 Age, Gender, Education, Native lang., ARS Personality (Big Five), Residence TAT ar (2007b) Emails 1 1,033 2,085 Age, Education, Gender, ARS Personality (MBTI) Fatima et al. (2017) Facebook 4 479 2,156 Age, Birthplace, Gender, Education, ARS Extroversion, Nat. lang., Occupation Litvinova et al. (2017) Essays 1 500 145 Age, Education, Gender, Personality ARS Preotiuc-Pietro and Ungar (2018) Tweets 1 4,098 16,785* Age, Education, Gender, Income, Race ARS PAN15 (2015) Tweets 4 1,070 1,205 Age, Gender, Personality (Big Five) ARS Tighe and Cheng (2018) Tweets 1 250 31,011* Personality (Big Five) ARS Clips CSI (2014) Essays 1 749 976 Age, Birthplace, Gender, ARS Personality (Big Five) Preotiuc-Pietro et al. (2017) - D1 Tweets 1 3,938 15,587* Age, Gender, Politics ARS Schwartz et al. (2013) Facebook 1 136,000 4,129 Age, Gender, Personality (NEO-PI-R) ARS Ciot et al. (2013) Tweets 4 8,618 12,700* Gender ORS Emmery et al. (2017) Tweets 1 6,610 31,750* Gender ORS Volkova and Bachrach (2015) Tweets 1 5,000 2,540 Age, Children, Education, Gender, ORS Income, Intelligence, Optimism, Political alignment, Ethnicity, Religion, Relationship, Satisfaction Kapociute-Dzikiene et al. (2015) Essays 1 186 286 Age, Gender OIS Bergsma et al. (2012) Papers 1 4,500 (?) Gender, Native language OIS Our work Tweets 37 71,706 29,968 up to 239 OIS Table 1: Survey of author profiling corpora. A * indicates an estimation based on an average of 12.7 words per tweet from the reported number of tweets and a ? unavailable information. Row groups reflect acquisition strategy. by Pennebaker et al. (2003), Koppel et al. (2002), Schler et al. (2006), and Argamon et al. (2009); recent works add novel traits, trait relations, multilingualism, and microblogs. The largest annual shared task on author profiling is part of the PAN competition (Rangel Pardo et al., 2013, 2014, 2015, 2016, 2017b, 2018). 
Profiling research related to aspects such as behavioral traits (Kumar et al., 2018), medical conditions (Choudhury et al., 2013), or native language identification (NLI) have been excluded from our survey, since these have developed into subfields of their own right. Three criteria describe the quality of the surveyed resources: the representativeness of the targeted population, the comprehensiveness in terms of author, text, and label size, and the reliability of label attributions. Table 2 shows our taxonomy of label acquisition strategies for reliability and comprehensiveness evaluation: labels provided by the author or by others (A/O), labels provided independently or on request (I/R), and labels reIndependent Requested Structured Unstructured Structured Author (AIS) (AIU) (ARS) Profile forms Posts, Comments Questionaires Others (OIS) (OIU) (ORS) Wikidata News, Mentions Crowdsourcing Table 2: Taxonomy of label acquisition strategies with common example applications. trieved in structured or unstructured form (S/U). The six resulting strategies, disregarding R-U combinations as inapplicable, describe the general strategy and hint possible issues: (1) subjectivity or misunderstandings by experts, volunteer annotators, or crowdsourcing workers versus deception and self-serving bias by author-self-reported labels, (2) self-selection bias and per-author cost in requested labels versus few and stale trait choices in independent reporting, and (3) imprecision, incompleteness, and misunderstandings in unstructured versus restricted choices in structured labeling. 2613 3 The Webis Celebrity Corpus This section introduces the Webis Celebrity Corpus 2019, detailing how we identified celebrities at scale, compiled a large corpus of their writing, and linked it with Wikidata to obtain personal profiles. A corpus analysis and validation follows. 3.1 Who is a Celebrity? To operationalize the term “celebrity”, we say that a person has a celebrity-like status, be it locally or globally, if he or she possesses a verified Twitter account, and at the same time, is deemed notable enough to be the subject of a Wikipedia article and a Wikidata item. Importantly, Twitter verifies “that an account of public interest is authentic” (Twitter, 2018), awarding a blue checkmark badge: . Notability at Wikipedia pertains to people who are “worthy of notice,” “remarkable,” or “famous or popular” (Wikipedia, 2018). While verified accounts also include organizations, and while most notable people at Wikipedia/Wikidata are not considered celebrities, it is their intersection which provides for a good approximation. To collect celebrity profiles at scale, we join these sources of information. 3.2 Corpus Construction We crawled all 297,878 verified Twitter accounts,2 and linked them with Wikidata items. This is a non-trivial task: a Twitter account name and its corresponding Wikidata item need not have an exact string match, and there may be false matches. Table 3a shows the six candidate names we obtained from the unique, static Twitter “@”-names and the free-form display names. Table 3b shows the linking results. Accounts were marked as human or not human based on Wikidata’s instance of property. In the sequence of name candidates I-VI, a human match was kept, even if successive candidates matched non-human items. If items differed between languages for the same candidate, matches were marked ambiguous. 
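The human check just described can be reproduced against the public Wikidata SPARQL endpoint. The snippet below is a simplified illustration — one exact English-label lookup per name candidate, restricted to humans via the instance-of property (P31 = Q5) — rather than the pipeline actually used to build the corpus, which would more plausibly operate on a full Wikidata dump:

```python
import json
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def find_human_item(candidate):
    """Return the IRI of a Wikidata item whose English label exactly matches the
    name candidate (one of candidates I-VI) and that is an instance of human."""
    query = """
    SELECT ?item WHERE {
      ?item rdfs:label %s@en ;
            wdt:P31 wd:Q5 .              # instance of: human
    } LIMIT 1
    """ % json.dumps(candidate)          # crude escaping of the label literal
    r = requests.get(WIKIDATA_SPARQL,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "celebrity-linking-sketch/0.1"},
                     timeout=30)
    r.raise_for_status()
    bindings = r.json()["results"]["bindings"]
    return bindings[0]["item"]["value"] if bindings else None
```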
Matches containing one of the eight deathrelated Wikidata properties and a date of death before Twitter’s launch in March 2006 were marked memorial. All mismatches identified during our subsequent corpus validation were marked as error. After excluding matches with private timelines, 71,706 valid account-item matches remained. 2Official list: https://twitter.com/verified, retrieved May 2018 3.3 Corpus Validation A large ground truth for evaluating our TwitterWikidata matches is provided by Wikidata itself: 89,451 items about humans include a Twitter username; 28,454 of these usernames intersect with the 297,878 verified Twitter accounts we crawled. Comparing these 28,454 true matches with those obtained by our matching heuristic, we distinguish three cases: (1) 20,579 are linked correctly, (2) 124 are linked incorrectly (0.6% error rate), and (3) 7,751 are not linked (27.7% miss rate). Thus, our heuristic achieves a very high precision of 0.994 at a reasonably high recall of 0.723. Table 3b (bottom row group) breaks down the number of matches by type and name candidate. The most successful name candidate is I, yielding 92% of all matches, but only half the erroneous ones. Name candidates II, III, and VI contribute negligibly, while candidates IV and V provide only for 5% of the matches combined, but 45% of all errors. At an overall error rate of 0.6%, though, candidates IV and V produced 3,416 correct and only 56 incorrect matches, rendering them still viable. 3.4 Corpus Analysis The corpus we created contains 29,968 words on average per author and 1,523 different Wikidata properties, of which 239 are personal traits relevant for profiling. Table 4 shows a selection of those traits, the most common value and for how many celebrities they are annotated. The remaining properties split into 1,224 external references (i.e., links to other sites) and 60 miscellaneous properties (mostly internal references and multimedia data). Of the 239 traits, 45 are attributed to more than 1,000, and 5 to more than 55,000 users simultaneously. The extracted Wikidata properties are highly specific and frequently feature over 100 different values per property within our corpus, although most are Zipf-distributed and can easily be aggregated or reduced to smaller dimensions, as we will demonstrate with occupation in Section 4. It should be noted that labels, such as ethnicity, religion, and native language, are present mostly for minorities rather than the majority. We collected an average 2,181 tweets per celebrity and 156,411,899 tweets in total (≈3 billion words), covering 98.05% of all their tweets.3 Of all collected tweets, 29.3% are retweets and 20.9% 3Though Twitter allows for retrieving only the 3,200 most recent tweets per account, its total number of tweets is given. 2614 (a) Name candidate generation rule I only alphanumeric characters of the display name II reference name split at capitalization III reference name split at display name IV first and last part from I, split at spaces V all but the last part from I VI all but the last two parts from I (b) Celebrity Error Memorial Not hum. Ambig. 
all 71,706 124 2,666 60,232 896 I 91.8% 50.0% 70.4% 77.6% 82.6% II 2.8% 3.2% 2.6% 6.2% 1.8% III >.1% 0.0% 0.0% >.1% 0.0% IV 1.8% 23.3% 5.6% 3.8% 5.3% V 2.9% 21.8% 9.2% 10.6% 9.6% VI 0.3% 1.6% 12.3% 1.9% 0.8% (c) Dataset Authors Training Test PAN15 (2015) 152 142 PAN16 (2016) 428 78 PAN17 (2017b) 3,600 2,400 PAN18 (2018) 2,000 1,900 Celebrities 31,861 13,614 Table 3: (a) Rules to generate name candidates for Wikidata matching from Twitter reference and display names. (b) Evaluation of matching success as per generation rule. (c) Sizes of the datasets used for evaluation. Label Occurrences Most frequent value Sex 65,035 90.1% Male 71.7% Occupation 63,017 87.9% Actor 15.3% Date of birth 60,493 84.4% Educated at 28,134 39.2% Harvard 2.1% Sport 18,688 26.1% Football 30.8% Languages spoken 12,094 16.9% English 54.9% Political party 6,703 9.4% Republican 16.4% Genre 6,699 9.3% Pop Music 21.6% Race 3,531 0.5% African Am. 66.5% Religion 2,960 0.4% Islam 23.5% Table 4: Selection of relevant personal traits studied in the related work, how often they have been assigned in our corpus and the most frequent value for each label. replies. Of the 49.7% remaining tweets, an average of 989 (13,938 words) per celebrity are longer than 20 characters and do not contain links, yielding a conservative estimate of tweets amenable for style analysis. Although celebrities tweeted in 50 different languages, 77% of all timelines consisted of tweets exclusively written in English, followed by 7% in Spanish and 4% in French, while 2,104 celebrities tweeted at least bilingual. 3.5 Corpus Reliability and Limitations Regarding the representativeness of our sample from the population of celebrities, we may cautiously claim to have obtained a wide cross-section of people of elevated status. However, celebrities are excluded who do not use Twitter, whose account is not verified (which is exceedingly unlikely, the more famous they are), or who have no Wikipedia article about themselves. There are no reliable estimates of the true number of celebrities worldwide, but it is safe to assume that our corpus has a bias towards Western culture, and particularly English-speaking celebrities. Regarding profile comprehensiveness, our corpus provides for comparably long samples of writing per author and a rich set of traits, albeit many traits are available only for a subset of profiles. Most celebrities provide genuine writing samples of themselves at Twitter, but some employ public relations staff to manage their account. Though a problem for generic author profiling, this does not impede celebrity profiling. Celebrities craft public personas as their own unique brands. If a celebrity decides to employ staff to do so, approving their impersonations, these personas are no less genuine and normative than personally crafted personas. The information about the traits of celebrities obtained from Wikidata can be considered highly reliable. Dedicated volunteers collect all kinds of personal information about celebrities, which are often referenced and under constant review by other Wikipedia and Wikidata editors. As per our taxonomy of label acquisition strategies in Table 2, we employ an OIS strategy: we obtain labels from third-party expert annotators (O), who are independent (I), supplying data in structured form (S). 
4 Evaluation To investigate the usefulness of our corpus for author profiling, we carry out a first large-scale profiling experiment by predicting celebrity occupation and gender and evaluating four state of the art approaches that won the PAN 2015-2018 author profiling competitions. Instead of retraining their prediction models, we use the models for gender inference as they have been trained on the PAN training datasets provided to participants of the respective years. Additionally, we train our own baseline gender model on celebrity profiles. Gender is a suitable benchmark trait that is frequently studied in the related work and a recurring trait prediction task at PAN. We observe a successful model transfer, thus mutually corroborating that ours and the PAN corpora capture the same underlying concept of gender. 4.1 Preprocessing and Baselines For our experiments, we extracted a subset of 45,475 English-speaking profiles from our corpus with the traits gender and occupation and split it 70/30 into training and test sets. Table 3c shows 2615 Model PAN15 PAN16 PAN17 PAN18 Celeb alvarezcamona15 (2015) 0.859 – – – 0.723 nissim16 (2016) – 0.641 – – 0.740 nissim17 (2017) – – 0.823 – 0.855 danehsvar18 (2018) – – – 0.822 0.817 CNN (Celeb) 0.747 0.590 0.747 0.756 0.861 CNN (Celeb + PAN15) 0.793 – – – – CNN (Celeb + PAN16) – 0.690 – – – CNN (Celeb + PAN17) – – 0.768 – – CNN (Celeb + PAN18) – – – 0.759 – Table 5: Accuracy of (top) the state of the art gender prediction approaches on their respective datasets and transfer performance to celebrities, and (bottom) our baseline deep learning approach, with and without retraining on the PAN datasets. this dataset in comparison to the PAN datasets. Our subset has 1,379 different occupations annotated, which we manually assigned to eight groups: sports, performer, creator, politics, manager, science, professional, and religious. We preprocessed the text by lowercasing, replacing mentions with <user>, hashtags with <hashtag>, hyperlinks with <url>, number-groups with <numbers>, the most frequent emoticons with <smiley>, and we removed all punctuation sequences beyond basic English punctuation marks. As baseline models for gender and for occupation prediction, we adapted the convolutional neural network (CNN) for text classification introduced by Kim (2014). Our variant of this model builds on the 100-dimensional GloVe (Pennington et al., 2014) Twitter embeddings, uses four parallel 1D-convolution layers with 128 filters each for 1-, 2-, 3-, and 4-grams, a 64-node dense layer for concatenation after the convolutions, and a final classification layer. The models for occupation and gender only differ in the last classification layer and loss function used to facilitate binary (gender) and categorical truth (occupation). We limited the vocabulary to the most common 100,000 words and padded the word-sequence for each author to 5000 words, which is roughly the average per author word count between ours and the PAN datasets. In our tests on the celebrity profiles, this hyperparameter setting achieves more consistent results than fewer or shorter n-gram filters, smaller dense layers, shorter or longer sequence length, or a larger vocabulary. Note that our corpus has labels for more than the two sexes male and female, however, the PAN data did not, so that we excluded profiles with other genders from our experiments, leaving their investigation for future work. 4.2 Evaluation Results Table 5 shows all models’ transfer performance between populations on gender. 
In general, all models generalize well to the respectively unseen datasets but perform best on the data they have been specifically trained for. The largest difference can be observed on the sub-1,000 author dataset PAN15, where the model of Álvarez-Carmona et al. (2015) suffers a significant performance loss, and PAN16, where the model of Busger op Vollenbroek et al. (2016) performs notably better on the celebrity data. This was a surprise to us that may be explained by the longer samples of writing per profile in our corpus. This hypothesis is also supported by the large increase in accuracy of the baseline model after retraining for two epochs with the PAN15 and PAN16 training datasets, respectively. The occupation model achieved a 0.7111 accuracy. Altogether, the results of our experiments show that profiling models trained on a random choice of people generalize to celebrities, and vice versa. Our corpus can hence be used for generic author profiling, while providing significantly richer profiles in terms of writing samples and as of yet unexplored personal traits. The scale of our corpus allows for the training of deep learning models, which, at least on our corpus, outperform the state of the art. We expect that further fine-tuning of the model architecture will yield significant improvements. 5 Conclusion This paper introduces the Webis Celebrity Corpus 2019, the first corpus of its kind comprising a total of 71,706 celebrity profiles, 239 profilingrelevant labels, and 3 billion words. Its quality is due to Twitter’s verification process, Wikidata’s accuracy, and our low-error linking strategy between the two sites. Its generalizability qualities for gender prediction have been demonstrated using state-of-the-art approaches. Our corpus formed the basis for the first celebrity profiling competition, organized as part of the PAN evaluation lab (Wiegmann et al., 2019). The traits studied were the degree of fame, occupation, age, and gender, introducing fame and occupations as novel, celebrity-specific profiling traits, and revisiting the well-known traits age and gender. In future work, we plan on improving the corpus by incorporating verified accounts from other social networks, and, by inferring new labels for as of yet unlabeled celebrities through link prediction. 2616 References Miguel A. Álvarez-Carmona, A. Pastor López-Monroy, Manuel Montes y Gómez, Luis Villaseñor-Pineda, and Hugo Jair Escalante. 2015. INAOE’s participation at PAN’15: Author Profiling Task—Notebook for PAN at CLEF 2015. In (Cappellato et al., 2015). Shlomo Argamon, Moshe Koppel, James W. Pennebaker, and Jonathan Schler. 2009. Automatically Profiling the Author of an Anonymous Text. Commun. ACM, 52(2):119–123. Krisztian Balog, Linda Cappellato, Nicola Ferro, and Craig Macdonald, editors. 2016. CLEF 2016 Evaluation Labs and Workshop – Working Notes Papers, 5-8 September, Évora, Portugal, CEUR Workshop Proceedings. CEUR-WS.org. Angelo Basile, Gareth Dwyer, Maria Medvedeva, Josine Rawee, Hessel Haagsma, and Malvina Nissim. 2017. N-GrAM: New Groningen Author-profiling Model—Notebook for PAN at CLEF 2017. In (Cappellato et al., 2017). Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric Analysis of Scientific Articles. In HLT-NAACL. The Association for Computational Linguistics. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. ACM. 
Mart Busger op Vollenbroek, Talvany Carlotto, Tim Kreutz, Maria Medvedeva, Chris Pool, Johannes Bjerva, Hessel Haagsma, and Malvina Nissim. 2016. GronUP: Groningen User Profiling—Notebook for PAN at CLEF 2016. In (Balog et al., 2016). Linda Cappellato, Nicola Ferro, Lorraine Goeuriot, and Thomas Mandl, editors. 2017. CLEF 2017 Evaluation Labs and Workshop – Working Notes Papers, 11-14 September, Dublin, Ireland, CEUR Workshop Proceedings. CEUR-WS.org. Linda Cappellato, Nicola Ferro, Gareth Jones, and Eric San Juan, editors. 2015. CLEF 2015 Evaluation Labs and Workshop – Working Notes Papers, 8-11 September, Toulouse, France, CEUR Workshop Proceedings. CEUR-WS.org. Linda Cappellato, Nicola Ferro, Jian-Yun Nie, and Laure Soulier, editors. 2018. CLEF 2018 Evaluation Labs and Workshop – Working Notes Papers, 11-14 September, Avignon, France, CEUR Workshop Proceedings. CEUR-WS.org. Miguel Ángel Álvarez Carmona, Estefanía Guzmán-Falcón, Manuel Montes-y-Gómez, Hugo Jair Escalante, Luis Villaseñor Pineda, Verónica Reyes-Meza, and Antonio Rico Sulayes. 2018. Overview of MEX-A3T at Ibereval 2018: Authorship and Aggressiveness Analysis in Mexican Spanish Tweets. In IberEval@SEPLN, volume 2150 of CEUR Workshop Proceedings. CEUR-WS.org. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting Depression via Social Media. In ICWSM. The AAAI Press. Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender Inference of Twitter Users in Non-English Contexts. In EMNLP. ACL. Saman Daneshvar and Diana Inkpen. 2018. Gender Identification in Twitter using N-grams and LSA—Notebook for PAN at CLEF 2018. In (Cappellato et al., 2018). Chris Emmery, Grzegorz Chrupala, and Walter Daelemans. 2017. Simple Queries as Distant Labels for Predicting Gender on Twitter. In NUT@EMNLP. Association for Computational Linguistics. Dominique Estival, Tanja Gaustad, Son Pham, Will Radford, and Ben Hutchinson. 2007a. Author profiling for English Emails. Dominique Estival, Tanja Gaustad, Son Bao Pham, Will Radford, and Ben Hutchinson. 2007b. TAT: An Author Profiling Tool with Application to Arabic Emails. In ALTA. Australasian Language Technology Association. Mehwish Fatima, Komal Hasan, Saba Anwar, and Rao Muhammad Adeel Nawab. 2017. Multilingual Author Profiling on Facebook. Inf. Process. Manage., 53(4). Matej Gjurkovic and Jan Snajder. 2018. Reddit: A Gold Mine for Personality Prediction. In PEOPLES@NAACL-HTL. Association for Computational Linguistics. Jurgita Kapociute-Dzikiene, Andrius Utka, and Ligita Sarkute. 2015. Authorship Attribution and Author Profiling of Lithuanian Literary Texts. In BSNLP@RANLP. INCOMA Ltd. Shoumen, BULGARIA. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In EMNLP. ACL. Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically Categorizing Written Texts by Author Gender. Literary and Linguistic Computing, 17(4). Ritesh Kumar, Aishwarya N. Reganti, Akshit Bhatia, and Tushar Maheshwari. 2018. Aggression-Annotated Corpus of Hindi-English Code-Mixed Data. In LREC. European Language Resources Association (ELRA). 2617 Tatiana Litvinova, Pavel Seredin, Olga Litvinova, and Olga Zagorovskaya. 2017. Differences in Type-Token Ratio and Part-of-Speech Frequencies in Male and Female Russian Written Texts. In Proceedings of the Workshop on Stylistic Variation. Association for Computational Linguistics. George Mikros. 2013. Authorship Attribution and Gender Identification in Greek Blogs. 
In Selected papers of the VIIIth International Conference on Quantitative Linguistics (QUALICO). Dong Nguyen, Noah A. Smith, and Carolyn P. Rosé. 2011. Author Age Prediction from Text Using Linear Regression. In Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities. ACM. James W. Pennebaker, Matthias R. Mehl, and Kate G. Niederhoffer. 2003. Psychological Aspects of Natural Language Use: Our Words, our Selves. Annual Review of Psychology, 54:547–577. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). Barbara Plank and Dirk Hovy. 2015. Personality Traits on Twitter - or - How to get 1,500 Personality Tests in a Week. In WASSA@EMNLP. The Association for Computer Linguistics. Daniel Preotiuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An Analysis of the User Occupational Class through Twitter Content. In ACL (1). The Association for Computer Linguistics. Daniel Preotiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle H. Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In ACL (1). Association for Computational Linguistics. Daniel Preotiuc-Pietro and Lyle H. Ungar. 2018. User-level Race and Ethnicity Predictors from Twitter Text. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018. Association for Computational Linguistics. Ricelli Ramos, Georges Neto, Barbara Barbosa Claudino Silva, Danielle Sampaio Monteiro, Ivandré Paraboni, and Rafael Dias. 2018. Building a Corpus for Personality-Dependent Natural Language Understanding and Generation. In LREC. European Language Resources Association (ELRA). Francisco Manuel Rangel Pardo, Fabio Celli, Paolo Rosso, Martin Potthast, Benno Stein, and Walter Daelemans. 2015. Overview of the 3rd Author Profiling Task at PAN 2015. In (Cappellato et al., 2015). Francisco Manuel Rangel Pardo, Manuel Montes-y-Gómez, Martin Potthast, and Benno Stein. 2018. Overview of the 6th Author Profiling Task at PAN 2018: Cross-domain Authorship Attribution and Style Change Detection. In (Cappellato et al., 2018). Francisco Manuel Rangel Pardo, Paolo Rosso, Irina Chugur, Martin Potthast, Martin Trenkmann, Benno Stein, Ben Verhoeven, and Walter Daelemans. 2014. Overview of the 2nd Author Profiling Task at PAN 2014. In CLEF 2014 Evaluation Labs and Workshop – Working Notes Papers, 15-18 September, Sheffield, UK, CEUR Workshop Proceedings. CEUR-WS.org. Francisco Manuel Rangel Pardo, Paolo Rosso, Moshe Koppel, Efstathios Stamatatos, and Giacomo Inches. 2013. Overview of the Author Profiling Task at PAN 2013. In CLEF 2013 Evaluation Labs and Workshop – Working Notes Papers, 23-26 September, Valencia, Spain. CEUR-WS.org. Francisco Manuel Rangel Pardo, Paolo Rosso, Martin Potthast, and Benno Stein. 2017a. Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In Working Notes Papers of the CLEF 2017 Evaluation Labs, volume 1866 of CEUR Workshop Proceedings. CLEF and CEUR-WS.org. Francisco Manuel Rangel Pardo, Paolo Rosso, Martin Potthast, and Benno Stein. 2017b. Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In (Cappellato et al., 2017). Francisco Manuel Rangel Pardo, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. 
Overview of the 4th Author Profiling Task at PAN 2016: Cross-Genre Evaluations. In (Balog et al., 2016). Sara Rosenthal and Kathleen R. McKeown. 2011. Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations. In ACL. The Association for Computer Linguistics. Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W. Pennebaker. 2006. Effects of Age and Gender on Blogging. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs. AAAI. H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, Lukasz Dziurzynski, Stephanie M. Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin E. P. Seligman, and Lyle H. Ungar. 2013. Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach. In PLoS ONE, page 8(9): e73791. 2618 Edward P. Tighe and Charibeth K. Cheng. 2018. Modeling Personality Traits of Filipino Twitter Users. In PEOPLES@NAACL-HTL. Association for Computational Linguistics. Twitter. 2018. FAQ: About verified accounts. https://help.twitter.com/en/managing-your-account/ about-twitter-verified-accounts, accessed 15.11.2018. Ben Verhoeven and Walter Daelemans. 2014. Clips Stylometry Investigation (CSI) Corpus: A Dutch Corpus for the Detection of Age, Gender, Personality, Sentiment and Deception in Text. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014.. European Language Resources Association (ELRA). Ben Verhoeven, Walter Daelemans, and Barbara Plank. 2016. Twisty: A Multilingual Twitter Stylometry Corpus for Gender and Personality Profiling. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portorož, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA). Svitlana Volkova and Yoram Bachrach. 2015. On Predicting Sociodemographic Traits and Emotions from Communications in Social Networks and their Implications to Online Self-Disclosure. Cyberpsy., Behavior, and Soc. Networking, 18(12):726–736. Yuan Wang, Yang Xiao, Chao Ma, and Zhen Xiao. 2016. Improving Users’ Demographic Prediction via the Videos they Talk about. In EMNLP. The Association for Computational Linguistics. Matti Wiegmann, Benno Stein, and Martin Potthast. 2019. Overview of the Celebrity Profiling Task at PAN 2019. In CLEF 2019 Labs and Workshops, Notebook Papers, CEUR Workshop Proceedings. CEUR-WS.org. Wikipedia. 2018. Notability Guidelines for People. https://en.wikipedia.org/wiki/Wikipedia: Notability_(people), accessed 15.11.2018.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 252–262 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 252 Spatial Aggregation Facilitates Discovery of Spatial Topics Aniruddha Maiti Temple University Philadelphia, PA-19122, USA [email protected] Slobodan Vucetic Temple University Philadelphia, PA-19122, USA [email protected] Abstract Spatial aggregation refers to merging of documents created at the same spatial location. We show that by spatial aggregation of a large collection of documents and applying a traditional topic discovery algorithm on the aggregated data we can efficiently discover spatially distinct topics. By looking at topic discovery through matrix factorization lenses we show that spatial aggregation allows low rank approximation of the original document-word matrix, in which spatially distinct topics are preserved and non-spatial topics are aggregated into a single topic. Our experiments on synthetic data confirm this observation. Our experiments on 4.7 million tweets collected during the Sandy Hurricane in 2012 show that spatial and temporal aggregation allows rapid discovery of relevant spatial and temporal topics during that period. Our work indicates that different forms of document aggregation might be effective in rapid discovery of various types of distinct topics from large collections of documents. 1 Introduction Social microblogging sites such as Twitter generate large volumes of short documents through the activity of hundreds of millions of users around the world. This provides an unprecedented access to the pulse of the global society. Due to the sheer volume and diversity of the generated content, topic discovery has been an invaluable tool in an effort to make sense of this data. Regardless of a precise definition of a topic and a particular topic model, topics discovery is used to describe pertinent themes in a document corpus and serve to identify events, trends, and interests at the global, local, or a social group level. Among the most popular topic modeling techniques are Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA), and Non-negative Matrix factorization (NMF). When applying those techniques for topic discovery from microblogs, there are three main challenges: (1) how to improve computational speed, (2) how to extract useful topics, and (3) how to deal with short texts. Many papers were published that address one or more of these challenges and most of them propose to modify the original topic models. In this paper, we are focusing on aggregation (also referred to as pooling) (Alvarez-Melis and Saveski, 2016) (Hong and Davison, 2010) (Weng et al., 2010) (Steinskog et al., 2017), a particular document preprocessing technique that has been empirically shown to be useful for topic discovery from microblogs. The main idea of aggregation is to combine multiple documents into a single document according to some external criterion and to apply a topic discovery algorithm on the aggregated documents. The earliest mentions of aggregation (Mehrotra et al., 2013) (Hong and Davison, 2010)(Weng et al., 2010) are motivated by the difficulty when applying NMF and LDA to very short text documents (Hong and Davison, 2010). 
This difficulty in finding useful topics is often attributed to the sparseness of the document-word matrix (Yan et al., 2013) (Cheng et al., 2014), which fails to provide confident counts of word cooccurrence and information about the shared context (Phan et al., 2008). Microblogs often come with metadata such as hashtags, author name, time stamp, or location. By aggregating the microblogs according to such metadata, the intuition is that the resulting aggregated documents contain a sufficient number of words for topic modeling schemes to identify meaningful topics. In addition, the authors of those early papers observe that aggregating microblogs that are similar in some sense (semantically, temporally) enriches the content present in a single document and results in better topics (Mehrotra et al., 2013). Finally, due 253 (a) Common Topic (b) Distinct Topic Figure 1: Examples of common and distinct topics. a) Common topic: a work-related topic. b) Distinct temporal topic: presidential debate to reduction in a number of documents, aggregation also leads to computational savings. While aggregation has received interest in the research community and there are several empirical studies illustrating its benefits, we are not aware of a study that manages to provide, beyond brief intuitive arguments, an insight into why aggregation works and what are its advantages and limitations. In this paper we attempt to provide such an insight from the perspective of discovering spatially specific topics. As will be evident, our insights extend to other means of aggregation. Our argument will be given in the context of matrix factorization, where a document-word matrix X is represented as a product W · H, where j-th row of matrix H represents word distribution in j-th topic and i-th row of matrix W represents a distribution of topics in i-th document. We adopt the terminology from (Kim et al., 2015), which distinguishes between common and distinct topics (see Figure 1), where distribution of common topics within the corpus is not impacted by the aggregation metadata such as location, time, or author of a microblog, and distribution of distinct topics is correlated with the metadata. We show that factorization of the aggregated matrix Xa, obtained by merging documents based on metadata (e.g., location), allows its low rank approximation as Wa · Ha, where the resulting topic matrix Ha retains the distinct topics from H (e.g., spatial topics) and where the common topics from H are merged into a single topic in Ha. We will show empirical results confirming this observation both on synthetic and real-life data. In particular, we will demonstrate this behavior in case of spatial and temporal aggregation. The main contribution of this paper is in demonstrating that applying standard topic discovery algorithms such as NMF and LDA on aggregated documents results in discovery of topics related to the aggregation method. Moreover, since the aggregated matrix Xa can be orders of magnitude smaller than the original matrix X, the computational cost can also be reduced by orders of magnitude. Finally, as observed in the previous work, aggregation also alleviates the problem of sparsity when discovering topics in microblogs. 2 Related Work Topic modeling from microblogs has a vast amount of literature (Steiger et al., 2015). Early work includes using NMF on term correlation matrix (Yan et al., 2013) and ncut-weighted NMF (Yan et al., 2012). 
Recent work includes NMijF (Nugroho et al., 2017), which takes into account tweet-to-tweet interactions. Location recommendation model based on topic modeling was proposed in (Hu et al., 2013). NMF is used in DiscNMF (Kim et al., 2015) and STExNMF (Shin et al., 2017) to identify spatio-temporal topics. Pairfac (Wen et al., 2016) employs tensor decomposition accounting for location, time, and venue. In TopicOnTiles (Choi et al., 2018), the entire space-time is divided into small tiles and NMF is performed on each tile separately. LDA (Blei et al., 2003) has also been used for topic detection. In (Zhao et al., 2011), LDA is used to categorize and summarize tweets. In (Weng et al., 2010), LDA is used to find influential users in Twitter. Traditional topic modeling techniques such as LDA, LSA, and NMF are sensitive to sparsity (Hong and Davison, 2010). Different types of document aggregation schemes have been suggested to overcome this issue (Alvarez-Melis and Saveski, 2016). One example of an aggregation scheme is the author-topic model (Weng et al., 2010), in which multiple tweets from the same user are aggregated to construct documents representative of the user. In (Hong and Davison, 2010), it was observed that document aggregation endows the resulting dataset with interesting properties, where aggregation based on authors has been reported to produce topics which are different from topics discovered on non-aggregated dataset. User level aggregation was also found to be useful in related papers (Giorgi et al., 2018). Similar results were also observed for aggregation based on hashtags (Steinskog et al., 2017). These papers did not attempt to explain the mechanism behind changes in the discovered topics and this is where our current paper makes a contribution. 254 3 Methodology 3.1 Problem Setup Let us assume we are given a corpus of documents D = {d1, d2, d3, .., dN}, where N is the total number of documents. Let V be the vocabulary of unique words in the corpus. By using the bag of words representation, the corpus can be represented by a document-word matrix X of dimension N × V , where element Xi,j is the count of j-th word in i-th document. We will also assume that each document di is associated with a time stamp t(di) ∈1, ...T, where T is the number of time steps, and location l(di) ∈1, ...L, where L is the number of locations. We will make an assumption that there are K topics t1, ...tK, where topic tk defines probability that word wj will be generated by the topic as p(wj|tk), and that each document in a corpus is represented by a single topic. Our simplifying assumption that each document is generated by a single topic is acceptable when dealing with short documents such as microblogs. In addition, it will make it easier to describe the main effect of document aggregation. Among the K topics, we will assume that the first Kd are spatially distinct topics and the second Kc are common topics. For common topics, the probability or their occurrence does not depend on location of the document. In other words, p(tk|l) = p(tk), where l is location. Conversely, for spatially distinct topics, the probability of their occurrence is dependent on the document location. We illustrate such a setup in Figure 2, where there are 4 spatially distinct topics generated within 4 different circular regions and 2 common topics occurring equally likely over the whole square region. 
In this example, the probability that a distinct topic is generated within its assigned circle is constant and is zero outside. Given D, the objective is to find the distinct topics. In the following we will argue that document aggregation enables computationally efficient discovery of the distinct topics. 3.2 Effect of Spatial Aggregation on Rank In this section we will explain why spatial aggregation of documents facilitates discovery of spatially distinct topics. If we select a subset Xk of all documents from X generated by topic tk, the best rank-1 approximation of Xk is proportional to nk · hk, where nk is a column vector of length N whose i-th element is the sum of all words in i-th document and hk is a row vector of length V whose j-th element hkj equals p(wj|tk). Let us denote this rank-1 approximation as Xk1. If we sort the document-word matrix X by topics, we can approximate it by vertically concatenating rank-1 matrices Xk1. The rank of the resulting matrix X1 will be less than or equal to J. We observe that the rank of matrix X can be as high as V >> J and that matrix factorization of X into product W · H cannot guarantee successful topic discovery. On the other hand, we observe that factorization of X1 can easily result in discovery of the underlying J topics. Unfortunately, generating matrix X1 is as difficult as the topic discovery problem itself. We argue in the following that aggregation based on location results in generation of a matrix closely related to X1. As such, we demonstrate that spatial aggregation is very useful for discovery of spatially distinct topics. Let us define binary matrix Q with L rows and N columns as spatial aggregation matrix which merges the N original documents into L aggregated documents, where Ql,i = 1 if document xi belongs to l-th location and Ql,i = 0 otherwise. We construct the aggregated document-word matrix of size L × V as Xa = Q · X. The expected value of l-th row of matrix Xa equals: E(Xa l ) = X k(nlk · hk), (1) where, nlk is a scalar equal to the number of words generated from topic tk in documents from l-th location and hk is a row vector defined in the first paragraph of this subsection. If the number of documents at l-th location is large, the observed Xa l will be close to E(Xa l ). Since based on equation (1) each row of Xa can be approximated as the linear combination of K topic vectors hk, it follows that matrix Xa is approximately of rank K or less. We can thus closely approximate Xa as product W a · Ha, where k-th row of matrix Ha equals hk and (l, k)-th element of matrix W a equals nlk. We will now show that W a · Ha has rank lower than K. Since the Kc common topics are assumed to be location independent, the number of documents generated by k-th common topic is approximately the same at every location. Thus, we can approximate nlk = nk for each of the Kc common topics. Therefore, the last Kc columns of matrix W a are constant. As a result, the rank of matrix 255 W a · Ha is Kd + 1 or less, where the Kc common topics increase the rank by only one. As a result, we can replace the last Kc columns of W a with a single column equal to the sum of the last Kc columns of W a and replace the last Kc rows of Ha with a single row equal to the sum of the last Kc rows of Ha. The resulting topic matrix Ha is of dimension (Kd + 1) × V , where the last row is a sum of word probabilities over all common topics, while the first Kd rows are reserved for each of the Kd spatially distinct topics. 
This is a significant result showing that spatial aggregation facilitates discovery of spatially distinct topics while it collapses all documents generated by the common topics into a matrix that can be closely approximated by a rank-1 matrix. 3.3 NMF and LDA on Aggregated Data In the previous section we did not specify a particular algorithm for matrix factorization and topic discovery. NMF is a popular matrix factorization algorithm for nonnegative matrices such as document-word matrices. NMF finds nonnegative and sparse matrices W and H whose product approximates the original matrix. It solves the following optimization problem: F(W, H) = 1 2||X −W ·H||2 Fro+α·ρ·||W||1+ α · ρ · ||H||1 + 1 2α(1 −ρ) · ||W||2 Fro+ 1 2α(1 −ρ) · ||H||2 Fro. (2) Here, the Frobenius norm of a matrix A is denoted by ||A||Fro and α and ρ are regularization parameters. Rows of W of size N ×K represent the topic mixture within a particular document where K is the number of topics. Rows of H of size K × V represent the word distribution within a particular topic. The NMF optimization problem is typically solved iteratively and the algorithm becomes expensive for large data sets. NMF is also sensitive on collections of short documents such as microblogs. NMF favors commonly occurring topics and commonly ocurring words, which makes finding rare spatially distinct topics very difficult. Document aggregation based on metadata such as location directly addresses the aforementioned NMF issues. The arguments in the previous sections demonstrate the benefit of aggregation through matrix Figure 2: Spatially distinct topics on simulated data factorization. However, our assumptions made in 3.1 closely resemble the generating process used in LDA, where each document is a mixture over latent topics, and each topic is characterized by a distribution over words. From the corpus, LDA learns the topic distribution over documents and word distribution over topics. While, in theory, LDA should be able to discover topics directly from the original matrix X, it suffers from the same shortcomings as NMF: it is slow, fragile, and sensitive to sparse documents. As will be demonstrated in the experiments, document aggregation has very similar effects on both NMF and LDA. To summarize, the resulting distinct topic discovery procedure has the following steps: 1. Construct document-word matrix X. 2. Construct spatial aggregation matrix Q from metadata. 3. Perform NMF on aggregated matrix Q · X to find spatially distinct topics. If we wish to identify spatial-temporal topics, we may additionally aggregate the data based on time. First, the entire time span can be divided into smaller intervals. Then, all documents in each space-time cell are aggregated into a single document. Although we do not show it in our experiments, our major insight about the effect of document aggregation extends to other forms of aggregation such as author- or hashtag-based. 4 Experiments on Simulated Dataset In this section, we use synthetic data to study the effect of document aggregation on topic discovery. Following the setup provided in Section 3.1, we created a dataset using a simplistic generative model. 
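To make the three-step procedure concrete, the following is a minimal sketch of steps 1–3 applied to a toy corpus generated roughly as described in this section. It assumes NumPy and scikit-learn's NMF; the circle centers, radii, random seed, and NMF initialization are illustrative choices, not the paper's exact simulation.

```python
# Minimal sketch of the aggregation-then-NMF pipeline (Sec. 3.3) on a toy
# corpus like the one described here: 2 common and 4 spatially distinct
# topics, disjoint 100-word vocabularies, 10 words per document.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V_PER_TOPIC, DOC_LEN = 100, 10
topics = ["C1", "C2", "D1", "D2", "D3", "D4"]
vocab = {t: np.arange(i * V_PER_TOPIC, (i + 1) * V_PER_TOPIC)
         for i, t in enumerate(topics)}
# Illustrative circle placement (cx, cy, radius) inside the unit square.
circles = {"D1": (0.3, 0.3, 0.2), "D2": (0.7, 0.3, 0.15),
           "D3": (0.3, 0.7, 0.1), "D4": (0.7, 0.7, 0.1)}

docs, locs = [], []
for t in topics:
    n_docs = 10_000 if t.startswith("C") else 1_000
    for _ in range(n_docs):
        if t.startswith("C"):                 # common: uniform in the square
            x, y = rng.random(2)
        else:                                 # distinct: uniform in its circle
            cx, cy, r = circles[t]
            ang, rad = rng.random() * 2 * np.pi, r * np.sqrt(rng.random())
            x, y = cx + rad * np.cos(ang), cy + rad * np.sin(ang)
        docs.append(rng.choice(vocab[t], size=DOC_LEN))
        locs.append((x, y))

# Step 1: document-word count matrix X (N x V).
V = V_PER_TOPIC * len(topics)
X = np.zeros((len(docs), V))
for i, words in enumerate(docs):
    np.add.at(X[i], words, 1)

# Step 2: spatial aggregation matrix Q (L x N) for a 4x4 grid.
grid = 4
cell = [min(int(x * grid), grid - 1) * grid + min(int(y * grid), grid - 1)
        for x, y in locs]
Q = np.zeros((grid * grid, len(docs)))
Q[cell, np.arange(len(docs))] = 1

# Step 3: NMF on the aggregated matrix Q @ X (here with 5 topics).
nmf = NMF(n_components=5, init="nndsvd", max_iter=500)
W_a = nmf.fit_transform(Q @ X)   # aggregated-document x topic weights
H_a = nmf.components_            # topic x word matrix
```

Inspecting which per-topic vocabulary dominates each row of H_a reproduces the kind of comparison shown in Figure 3.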
Words in each document in the dataset are generated from two common topics (C1 and C2) 256 1 2 3 4 5 NMF on original dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 1 2 3 4 5 NMF on aggregated dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 Figure 3: Five topics discovered by NMF on nonaggregated and aggregated documents and four spatially distinct topics (D1, D2, D3 and D4). Each common and distinct topic uses a vocabulary with 100 words. Each document is associated with a single topic. To generate a document, a topic is chosen first, then 10 words are sampled randomly from the 100 words associated with that particular topic. Documents generated from the common topics are distributed randomly within the square. For each distinct topic, a circular region is defined within the square and the documents associated with that topic are placed by uniformly sampling within the circle. The placement of the circular regions is shown in Figure2. A total of 10, 000 documents are generated for each common topic and 1, 000 documents for each spatially distinct topic. We call this dataset the non-aggregated dataset. To demonstrate how aggregation affects the topic discovery, we divided the entire region in 4 × 4 small squares. Then we merged all the documents from each small square into a single aggregated document. In this way, we constructed 16 aggregated documents. We call this dataset the aggregated dataset. NMF set to find 5 topics was applied to the nonaggregated and the aggregated datasets. In Figure 3, we show the distribution of words in each of the 5 identified topic. For example, the first bin in the left subplot shows that discovered topic 1 has 91 unique words, all belonging to common topic C1. On the other hand, the first bin in the right subplot shows that discovered topic 1 has 100 unique words, 38 belonging to common topic C1 and 58 to common topic C2. We can see that none of the spatially distinct topics are discovered when we apply NMF on the non-aggregated data. All five identified topics contain words from the 2 common topics. On the other hand, in the aggregated dataset, the first identified topic contains a mixture of words from the 2 common topics, while the remaining 4 are almost entirely comprised of words from the 4 spatially distinct topics. This result ex1 2 3 4 5 6 7 8 9 10 NMF on original dataset 0 20 40 60 80 100 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 1 2 3 4 5 6 7 8 9 10 NMF on aggregated dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 Figure 4: Ten topics identified by NMF on nonaggregated and aggregated documents 1 2 3 4 5 NMF on original dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 1 2 3 4 5 NMF on aggregated dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 Figure 5: Five topics identified by NMF on original and aggregated data using a smaller set of documents perimentally supports our insight about the impact of spatial aggregation presented in section 3.2. 4.1 Effect of Number of Topics in NMF We repeated the NMF experiment, but this time we set the number of NMF topics to 10. We can see from Figure 4 that all 10 topics found on the nonaggregated data are still one of the two common topics. On the other hand, after applying NMF on the aggregated data, 4 of the discovered topics directly correspond to the 4 spatially distinct topics, while the remaining 6 discovered topics are a mixture of the 2 common topics. 
4.2 Effect of Number of Documents We repeated the experiments on a smaller corpus to see its effect on topic discovery. We generated 1, 000 documents for each common topic and 150 documents for each distinct topic. The result is summarized in Figure 5. As compared to Figure 3, we can see a slight deterioration of the quality of discovered spatially distinct topics from the aggregated data. In particular, all of the 4 discovered spatial topics are corrupted with more words from the common topics, which is particularly visible from the rightmost bin containing and an almost equal mixture of words from topics D1, C1, and C2. We observe that topic D1 corresponds to the largest circle. 4.3 Effect of Grid Density We repeated the previous experiment on the smaller dataset with 1, 000 documents for each 257 1 2 3 4 5 NMF on original dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 1 2 3 4 5 NMF on aggregated dataset 0 20 40 60 80 100 120 Count of words in NMF Topics C1 C2 D1 D2 D3 D4 Figure 6: Five topics identified by NMF using dense spatial grid 64 × 64 common topic and 150 documents for each spatially distinct topic, but this time with gradually increasing aggregation density. In Figure 6 we show results of applying NMF set to discover 5 topics for the spatial aggregation scheme with a grid size 64 × 64. As expected, the results look more similar to topic discovery from the non-aggregated dataset. Interestingly, despite the vary coarse aggregation (many spatial blocks were empty or with a single document), we still discovered topics D3 and D4, which correspond to the smaller circles. 5 Experiments on Real Life Data Identifying spatially distinct topics in a real life dataset is a challenging task. As we will demonstrate, we found that the aggregation scheme is quite successful in identifying distinct topics. We performed our experiments on Hurricane Sandy Twitter corpus downloaded through Twitter search API1 using the tweet IDs released in (Wang et al., 2015). The downloaded corpus contains 4.7 million tweets that temporally span 12 days surrounding the Hurricane Sandy and a few other distinguishable events between October 22nd, 2012 and November 2nd, 2012. Every tweet in the dataset is also geotagged to one of 13 states along the East Coast of the U.S. During preprocessing we transformed all characters to lowercase and removed stopwords and special characters. We also excluded repetitive letters that convey enthusiasm (e.g., birthdayy, birthdayyy, birthdayyyy). Finally, TF-IDF document-word matrix is constructed using the 20, 000 most frequent words in the corpus. Since the spatial distribution of tweets is highly imbalanced, we decided not to use a regular spatial grid. Instead, we employed k-means clustering on the latitude and longitude information for each tweet to identify 200 cluster centers in space. Each tweet is assigned to its nearest cluster center for spatial aggregation. Figure 7 shows different clusters on 50, 000 tweets randomly sampled 1https://developer.twitter.com/ Figure 8: State specific distinct topics from the corpus. We can observe that the density of clusters is much larger within heavily populated urban areas along the East Coast. Figure 7: K-means cluster for spatial aggregation NMF was employed to find 500 topics with α = 0.1 and ρ = 0.5. Only 107 rows of H were found to have at least one nonzero entry. Application of NMF on the 200 aggregated documents identifies some spatially distinct topics covering regions of varying size. 
Figure 8, shows word clouds for two large state-specific distinct topics. We also found that large metropolitan areas such as New York City, Philadelphia, and Pittsburgh are represented as separate spatially distinct topics. One such example is shown in Figure 9. Almost all the words in this topic are related to New York City airports. In addition to spatial aggregation, we also performed experiments by aggregating data in space and time. In addition to the k = 200 spatial clusters we divided the time interval into 12 days, resulting in a total of 2, 400 spatio-temporally aggregated documents. As expected, this aggregation reveals distinct spatio-temporal topics. We identified several purely temporal topics in this way, including the Halloween topic shown in Figure 10. It is interesting to observe that this topic also contains words related to the season opening 258 Figure 9: NYC airport-specific distinct topic Figure 10: Halloween and CMA temporal topics NBA game between L.A. Lakers and Miami Heat that occurred on the same day. Figure 10 also contains another temporally distinct topic associated with the 2012 Country Music Association (CMA) Award event that happened on the same day. To better illustrate this CMA-related topic, in Table 1 we show several representative tweets. These tweets were randomly selected from tweets containing at least one of the most frequent 10 words in the CMA-related topic. 5.1 Evaluation: Space-Time Scan Statistics Looking at word clouds is a descriptive way to evaluate the quality of discovered topics. In this subsection we will present experimental results attempting to quantitatively evaluate the quality of the discovered topics. To achieve this we use the space-time scan statistics implemented in the SaTScan software (Kulldorff, 2010). We selected the 10 most frequent words in each discovered topic and labeled each tweet from the corpus based on the presence of these words. If a tweet contains any of the 10 words it is assigned to the corresponding topic. We call all tweets assigned to the given topic the positive tweets. If the topic Table 1: Tweets related to CMA awards Anyone know what channel the cma is on? Can’t wait for the cma awards Everyone get prepared for a bunch of cma awards tweets Tomorrow is 46 cma awards so watching that!! carrie underwood is amazing Hunter hayes is perfect Not sure why Taylor Swift is taking over the country charts...her music is more of a mix now between country and pop Luke bryan on the CMAS omg omg !!! is strongly spatial, we would expect the assigned tweets to be strongly spatially clustered. If the topic is strongly spatio-temporal, we would expect the assigned tweets to cluster within a particular spatio-temporal area. The space-time scan statistic is employed to measure enrichment by positive tweets of cylindrical windows covering a circular spatial region and a temporal interval. The cylindrical window is moved in space and time to search for the statistically strongest clusters (Kulldorff, 2010). The cylinder with the strongest enrichment of positive tweets (e.g., based on the ratio between positive tweets and all tweets within the cylinder) is a potential candidate for the significant spatio-temporal cluster. Distributional properties of scan statistics can be used to evaluate the statistical significance of the strongest cylinder (Dwass, 1957). 
This is done by permuting the labels of tweets multiple times (999 times in this study) and calculating the score of the strongest cylinder in each permutation (Block, 2007). The p-value is then calculated by counting the fraction of the permuted scores larger than the score on the actual data. The p-value reported in this experiment can be thought of as a measure of the spatio-temporal distinctiveness of the identified topic. Characterization of distinct topics using p-value has some limitations. We observed that many distinct topics discovered through aggregation receive p-value equal to zero, making it impossible to identify the strongest distinct topic. For this reason, we used deviation (∆), which measures how many standard deviations apart is the score of the best cylinder observed on the actual data compared to the scores of the best cylinders observed on the permuted data. Table 2: Evaluation of the topic quality using SaTScan Topic General Theme Deviation (∆) Topic Type Power 26504.53 Temporal NYC 25282.17 Spatial NFL 12275.18 Temporal Presidential Debate* 11089.34 Temporal Snow 8624.95 Temporal New Jersey* 8355.10 Spatial Halloween* 7679.58 Temporal Pennsylvania* 6728.94 Spatial NYC Airport* 6424.54 Spatial Weather 2220.64 Temporal In Table 2, we show the strongest topics based on the deviation (∆). In each case, the p-value was 0. For topics labeled with stars in Table 2, 259 Figure 11: Positive (red) and negative (blue) examples and the position of the cluster identified by SaTScan for topic : Power Outage the corresponding word clouds were shown in Figures 1, 8, 9, 10. For the remaining topics, the top ten words are presented in Table 3. It may be noted that New York City, being a very large metropolitan area, has multiple identified topics. One such topic, called NYC airport, was previously presented in Figure 9. Another such topic, called NYC, is presented in Table 2. The spatiotemporal region called Power outage is shown in Figure 11. 10, 000 tweets in this figure are labeled as positive or negative based on the presence or absence of the keywords of this topic. This topic corresponds to multiple power outages in the aftermath of the Sandy Hurricane. Table 3: General theme of topics and related words Topics Words Power power sandy generator trees electricity tree open lights safe hurricane NYC york brooklyn nyc park manhattan city square mta island halloween NFL cowboys steelers romo giants harden church redskins touchdown eagles party Snow snow snowing cold weather delay boone wind blizzard snowed outside Weather barometer humidity temperature mph wind rain blacksburg steady wnw rising † Offensive words are removed 5.2 Comparison between LDA and LSA Previous studies indicate that NMF on Twitter data works better than other available topic modeling techniques (Klinczak and Kaestner, 2015), (Godfrey et al., 2014). This may be attributed to a slightly better robustness of NMF to the short document lengths. This problem is ameliorated in this 0 20 40 60 80 100 Topics 0 5000 10000 15000 20000 25000 30000 35000 Distance from mean (in std) Comparison of NMF, LDA and LSA NMF LSA LDA Figure 12: Comparison of NMF, LDA and LSA study through aggregation. In view of this, it is expected that other topic modeling approaches are also able to identify distinct topics in the aggregated data. To verify this, we tried two other popular algorithms, LSA and LDA.2 LSA is a truncated singular value decomposition technique. LDA is a generative probabilistic model. 
For LDA and LSA, the number of topics are taken to be 100 to be comparable to the number of topics identified by NMF. Document topic prior and topic word priors in LDA were set to 0.01. We found that LDA and LSA identify distinct topics comparable to NMF when applied to the spatio-temporally aggregated data. Some of the similar topics are selected manually from the LDA and NMF topic lists and shown in Table 4 for comparison. Table 4: Identified LDA topics similar to NMF Topics Words NYC york park brooklyn city pic nyc halloween st th center square street bar Power power sandy hurricane storm safe phone wind stay rain closed open Weather wind mb mph rain humidity cb barometer temp slowly cam midnight falling relative Presidential Debate romney obama debate class mitt president world vote talking week policy † Offensive words are removed It is difficult to draw one-to-one correspondence among all the topics identified by the three methods. We see from Table 4 that some topics are very similar in both NMF and LDA. However, while NMF discovers a topic related to the CMA, LDA and LSA do not. For this reason, instead of comparing the corresponding topics one at a time, the following strategy is applied. The topics in the 2python scikit-learn package is used for all three methods 260 three methods are first sorted based on the deviation (∆) scores and plotted in Figure 12. The most significant topics identified by all three algorithms exhibit similar scores. For the top 20 topics, performance of NMF is only slightly better. The average score of the top 20 topics for NMF is 3,823, while the average scores for LSA and LDA are 3,638 and 3,390 respectively. 5.2.1 Common Topics in LDA and NMF In Section 5.2, we mentioned that topic discovery algorithms such as LDA, NMF, and LSA are capable of finding distinct topics from aggregated documents. When non-aggregated data is used, these algorithms find common topics associated with day to day conversations. In Table 5, words associated with several common topics identified by LDA and NMF on a sample of the non-aggregated tweets are shown. It can be seen that the words in the identified topics do not correspond to a specific space or time. Table 5: Common topics from non-aggregated data LDA NMF cold shot dry blessed smoking wonderful cold weather hot room hungry feet world making sounds coffee running fun sounds making lot times safe games looks talking saw anymore west facebook twitter goodmorning jail facebook instagram guy past means throw start guys girl safe play awesome stay † Offensive words, informal words and internet short form of the words are removed 5.2.2 Influence of Aggregation Strategies and Randomization Our experiments with the simulated data in Section 4.3 revealed that topic discovery is impacted by the aggregation grid density. To see if the behavior transfers to Twitter data, we varied the number of clusters from 100 and 1, 000. As the number of clusters increased, we observed that some of the distinct topics discovered by NMF for k = 200 disappeared when k was increased to 500 or 1, 000. For example, the CMA topic disappeared with those larger numbers of clusters. We also observed relatively small changes in discovered topics for different runs of the clustering for the same value of k. We conclude that clustering used for aggregation has a modest impact on topic discovery. Figure 13: Visualization of temporal trends of topics 5.3 Temporal Trends in Topics SaTScan reports the significant space-time cylinders for each topic. 
It is possible to categorize those cylinders as spatial or temporal by inspecting the their size. As an alternative, we can use matrix W obtained by NMF to identify temporal clusters. Let W ∗be the matrix which is constructed from W by summing all the rows corresponding to the same time interval. W ∗then represents a purely temporal description of topic distribution. By inspecting the columns of W ∗, shown in Figure 13, we can obtain an additional insight into the nature of temporal topics. We can observe that only a small fraction of the identified topics are strongly temporal in nature. 6 Conclusion In this work, we showed that spatial aggregation of documents leads to discovery of spatially distinct topics. We performed an extensive study on synthetic and real data and demonstrated that spatial and spatio-temporal aggregation indeed leads to discovery of spatial and spatio-temporal distinct topics. To evaluate the quality of the discovered topics we proposed a metric based on space-time scan statistics. Our results show that aggregation is a very powerful and computationally efficient method for discovery of distinct topics. While our study focused on spatial aggregation, aggregation on other types of metadata such as authors, hashtags, or communities is expected to work equally well and discover other types of distinct topics from large collections of documents. 261 References David Alvarez-Melis and Martin Saveski. 2016. Topic modeling in twitter: Aggregating tweets by conversations. ICWSM, 2016:519–522. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Richard Block. 2007. Scanning for clusters in space and time:: A tutorial review of satscan. Social Science Computer Review. Xueqi Cheng, Xiaohui Yan, Yanyan Lan, and Jiafeng Guo. 2014. Btm: Topic modeling over short texts. IEEE Transactions on Knowledge and Data Engineering, 26(12):2928–2941. Minsuk Choi, Sungbok Shin, Jinho Choi, Scott Langevin, Christopher Bethune, Philippe Horne, Nathan Kronenfeld, Ramakrishnan Kannan, Barry Drake, Haesun Park, et al. 2018. Topicontiles: Tilebased spatio-temporal event analytics via exclusive topic modeling on social media. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, page 583. ACM. Meyer Dwass. 1957. Modified randomization tests for nonparametric hypotheses. The Annals of Mathematical Statistics, pages 181–187. Salvatore Giorgi, Daniel Preotiuc-Pietro, Anneke Buffone, Daniel Rieman, Lyle H Ungar, and H Andrew Schwartz. 2018. The remarkable benefit of userlevel aggregation for lexical-based population-level predictions. arXiv preprint arXiv:1808.09600. Daniel Godfrey, Caley Johns, Carl Meyer, Shaina Race, and Carol Sadek. 2014. A case study in text mining: Interpreting twitter data from world cup tweets. arXiv preprint arXiv:1408.5427. Liangjie Hong and Brian D Davison. 2010. Empirical study of topic modeling in twitter. In Proceedings of the first workshop on social media analytics, pages 80–88. ACM. Bo Hu, Mohsen Jamali, and Martin Ester. 2013. Spatio-temporal topic modeling in mobile social media for location recommendation. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1073–1078. IEEE. Hannah Kim, Jaegul Choo, Jingu Kim, Chandan K Reddy, and Haesun Park. 2015. Simultaneous discovery of common and discriminative topics via joint nonnegative matrix factorization. 
In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 567–576. ACM. Marjori NM Klinczak and Celso AA Kaestner. 2015. A study on topics identification on twitter using clustering algorithms. In Computational Intelligence (LA-CCI), 2015 Latin America Congress on, pages 1–6. IEEE. M Kulldorff. 2010. Satscan user guide for version 9.0. Department of Ambulatory Care and Prevention, Harvard Medical School, Boston, MA. Rishabh Mehrotra, Scott Sanner, Wray Buntine, and Lexing Xie. 2013. Improving lda topic models for microblogs via tweet pooling and automatic labeling. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, pages 889–892. ACM. Robertus Nugroho, Weiliang Zhao, Jian Yang, Cecile Paris, and Surya Nepal. 2017. Using time-sensitive interactions to improve topic derivation in twitter. World Wide Web, 20(1):61–87. Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from largescale data collections. In Proceedings of the 17th international conference on World Wide Web, pages 91–100. ACM. Dear Sungbok Shin, Minsuk Choi, Jinho Choi, Scott Langevin, Christopher Bethune, Philippe Horne, Nathan Kronenfeld, Ramakrishnan Kannan, Barry Drake, Haesun Park, et al. 2017. Stexnmf: Spatiotemporally exclusive topic discovery for anomalous event detection. In Data Mining (ICDM), 2017 IEEE International Conference on, pages 435–444. IEEE. Enrico Steiger, Joao Porto De Albuquerque, and Alexander Zipf. 2015. An advanced systematic literature review on spatiotemporal analyses of twitter data. Transactions in GIS, 19(6):809–834. Asbjørn Steinskog, Jonas Therkelsen, and Bj¨orn Gamb¨ack. 2017. Twitter topic modeling by tweet aggregation. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 77–86. Haoyu Wang, Eduard H Hovy, and Mark Dredze. 2015. The hurricane sandy twitter corpus. In AAAI Workshop: WWW and Public Health Intelligence. Xidao Wen, Yu-Ru Lin, and Konstantinos Pelechrinis. 2016. Pairfac: Event analytics through discriminant tensor factorization. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 519–528. ACM. Jianshu Weng, Ee-Peng Lim, Jing Jiang, and Qi He. 2010. Twitterrank: finding topic-sensitive influential twitterers. In Proceedings of the third ACM international conference on Web search and data mining, pages 261–270. ACM. Xiaohui Yan, Jiafeng Guo, Shenghua Liu, Xue-qi Cheng, and Yanfeng Wang. 2012. Clustering short text using ncut-weighted non-negative matrix factorization. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 2259–2262. ACM. 262 Xiaohui Yan, Jiafeng Guo, Shenghua Liu, Xueqi Cheng, and Yanfeng Wang. 2013. Learning topics in short texts by non-negative matrix factorization on term correlation matrix. In Proceedings of the 2013 SIAM International Conference on Data Mining, pages 749–757. SIAM. Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li. 2011. Comparing twitter and traditional media using topic models. In European Conference on Information Retrieval, pages 338–349. Springer.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2619–2626 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2619 Dataset Creation for Ranking Constructive News Comments Soichiro Fujita,† Hayato Kobayashi,‡ and Manabu Okumura† † Tokyo Institute of Technology ‡ Yahoo Japan Corporation / RIKEN AIP {fujiso@lr.,oku@}pi.titech.ac.jp, [email protected] Abstract Ranking comments on an online news service is a practically important task for the service provider, and thus there have been many studies on this task. However, most of them considered users’ positive feedback, such as “Like”-button clicks, as a quality measure. In this paper, we address directly evaluating the quality of comments on the basis of “constructiveness,” separately from user feedback. To this end, we create a new dataset including 100K+ Japanese comments with constructiveness scores (C-scores). Our experiments clarify (a) C-scores are not always related to users’ positive feedback and (b) the performance of pairwise ranking models tends to be more enhanced by the variation in comments than that in articles. 1 Introduction Users’ comments on an online news service can be regarded as beneficial content (often called usergenerated content1) for service providers because users can obtain supplementary information about news articles through other users’ opinions. Given that comment visibility is a part of the user experience, ranking comments is practically important. For example, Figure 1 shows a page displaying comments on a Japanese news portal, Yahoo! News.2 The page has a list of comments (displayed below articles), and each comment has buttons for user feedback (“Like,” “Dislike,” and “Reply”). There have been many comment ranking studies (Hsu et al., 2009; Das Sarma et al., 2010; Brand and Van Der Merwe, 2014; Wei et al., 2016) with users’ positive feedback for a comment (e.g., “Like”- or “Upvote”-button clicks) serving as the 1https://en.wikipedia.org/wiki/ User-generated_content 2https://news.yahoo.co.jp/ Figure 1: Examples of comments on Yahoo! News. quality measure. However, this type of measurement has two drawbacks: (a) user feedback does not always satisfy the service provider’s needs, such as to create a fair place, and (b) user feedback will be biased by where comments appear in a comment thread. A typical situation for (a) can be seen in political comments, where the “goodness” of the comment will be decided on the basis of the political views of the majority of the users rather than its quality. The situation for (b) can be illustrated by a case where earlier comments tend to receive more feedback since they will be displayed at the top of the page, which implies later comments will be ignored irrespective of their quality. In this paper, we directly evaluate the quality of comments separately from user feedback, focusing on their “constructiveness,” as studied in (Napoles et al., 2017; Kolhatkar and Taboada, 2017). This quality measure is reasonable for services in that displaying constructive comments can stimulate discussion on a news article, which makes the user-generated content richer. We use the definition of constructiveness as in the previous studies, but a clear difference from them is that we address a ranking task, whereas the aforementioned sources addressed classification tasks. In a ranking task, we need to rank comments for each article. 
That is, when we label 1,000 comments, there are many choices, e.g., 200 articles with 5 2620 comments or 10 articles with 100 comments. We investigate which choice is better for widely used ranking algorithms. Our contributions are as follows. • We create a dataset for ranking constructive comments including 100K+ Japanese comments with constructiveness scores, in collaboration with Yahoo! News. Our dataset will be publicly available.3 • We show empirical evidence that constructiveness scores are not always related to positive user feedback such as “Like”-button clicks. • We investigate how to label comments for ranking and clarify that the performance of pairwise ranking models tends to be more enhanced by the variation in comments than that in articles. 2 Dataset Creation 2.1 Definition for “Constructiveness” According to the dictionary,4 “constructive” means “having or intended to have a useful or beneficial purpose.” Therefore, we expect constructive comments to provide insight and encourage healthy discussion. However, this dictionary definition is a bit too generic for deciding if a comment is constructive. To avoid individual variation as much as possible, we need to prepare a more specific definition before annotation. We follow a previous study (Kolhatkar and Taboada, 2017) on constructiveness, where a questionnaire given to 100 people clarified detailed conditions for constructive comments. We digested it into several simple conditions, shown in Table 1, so that crowdsourced workers could systematically judge comments. Our conditions consist of a precondition for maintaining decency and relevance and four main conditions for representing typical cases of being constructive. Specifically, a constructive comment is defined as one satisfying the precondition and at least one of the main condition in Table 1. 2.2 Crowdsourcing Task Our purpose is to label each comment with a graded numeric score that represents the level of constructiveness for ranking comments. We refer to this score as the constructiveness score 3https://research-lab.yahoo.co.jp/en/ software/ 4https://en.oxforddictionaries.com/ definition/constructive Pre cond. • Related to article and not slander Main cond. • Intent to cause discussions • Objective and supported by fact • New idea, solution, or insight • User’s rare experience Table 1: Conditions for constructive comments. Constructive comment is defined as one satisfying the precondition and at least one of main conditions. #A #C #C/#A Score Shallow 8,000 40,000 5 0 ∼10 Deep 400 40,000 100 0 ∼10 Test 200 42,436 212 0 ∼40 Table 2: Details on created datasets. #A and #C mean numbers of articles and comments in each dataset, respectively. (C-score). We defined the C-score as the number of crowdsourcing workers who judged a comment to be constructive as an answer to a yes-or-no (binary) question because it is more difficult for workers to answer other types of questions such as a numerical selection question (like “How constructive is the comment?”) or a comparison question (like “Which comment is the most constructive?”). This definition realizes a graded numeric score that harnesses the individual variation due to subjective judgements in the conditions, such as “new idea” and “rare experience.” As a consequence, the C-score indicates how many people think that a comment is constructive with the goal of sufficiently satisfying as many users as possible. We used Yahoo! Crowdsourcing5 to label comments. 
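As a simple illustration, a C-score is just the count of positive judgments per comment; in the sketch below the judgment matrix is hypothetical.

```python
# Minimal sketch of how a C-score is obtained from binary worker judgments.
# `judgments` is hypothetical; in practice it comes from the crowdsourcing task.
import numpy as np

# judgments[i, j] = 1 if worker j judged comment i as constructive, else 0.
judgments = np.array([[1, 0, 1, 1, 1, 0, 1, 1, 1, 1],   # 10 workers (training)
                      [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
c_scores = judgments.sum(axis=1)   # e.g., array([8, 1])
```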
We prepared a task with questions that reference a news article and its comments extracted from Yahoo! News. After the workers read the definition of constructiveness, we asked them to judge whether each comment was constructive (see Appendix A for detailed instructions). To ensure reliability, we extracted only serious workers who correctly answered quality control questions with obvious answers that were randomly included in each task. We used 10 (or 40) workers for each comment for a training (test) dataset. For example, a C-score of 8 means that 8 workers judged a comment as constructive. 5https://crowdsourcing.yahoo.co.jp/ 2621 Comment Score Ex.1) We should build a society where people do not drink and smoke since both can lead to bad health or accidents. 9 Ex.2) If giving freedom, punishment should also be strictly given. 6 Ex.3) They are fools because they smoke, or they smoke because they are fools. 0 Table 3: Examples of comments and scores for article “Lifting the ban on drinking and smoking at 18.” 2.3 Training and Test Datasets We created three datasets: Shallow, Deep, and Test, as shown in Table 2. Shallow and Deep are training datasets made from 8K articles with 5 comments and 400 articles with 100 comments respectively, as extreme cases with the same cost. The comments in each setting were randomly chosen after we extracted news articles with more than 100 comments and were 10 to 125 Japanese characters long. Test is the test dataset we made from 200 articles with an average of 212 comments. We used 40 workers for each comment only for Test to evaluate the ranking results in as much detail as possible, where the setting of 40 was chosen to avoid the top-ranked comments that frequently had the same score. Note that we did not use such a costly setting for training since training data tends to increase over time. None of the datasets overlapped. We calculated an agreement score by using Krippendorff’s alpha (Krippendorff, 2004; Antoine et al., 2014) and by regarding the ranking task as a classification task of whether one comment is more constructive than the other for any pair of two comments, in a similar manner as RankSVM in Section 3. The agreement scores of Shallow and Deep were 0.5282 and 0.5495, respectively, which mean “moderate agreement” (Landis and Koch, 1977). Note that directly applying such an agreement measure is not appropriate for our task since we assume individual variations in workers making graded scores. Table 3 shows examples of scored comments. Ex. (1) has a high score since it includes a constructive opinion with some reasoning. Ex. (2) has a middle score since the judgement, e.g., whether the comment is a new idea, depends on each worker’s background knowledge. Ex. (3) has a low score since it includes offensive content. Figure 2: Frequency distribution of C-scores for comment group selected in descending order of user feedback (Like) and one randomly selected (Random). 2.4 Comparison with User Feedback We investigated the relationship between constructiveness and user feedback by comparing 5K comments randomly extracted in the same way as for Shallow and 5K comments extracted in descending order of user feedback score. The user feedback score of a comment was calculated as the number of “Likes” minus 5 times the number of “Dislikes.” This definition is determined on the basis of the fact that the ratio of “Likes” and “Dislikes” was about 1:5 on average, and in fact, a similar definition is used as a basic sorting feature in this news service. 
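A minimal sketch of the two scores used in this comparison may make the arithmetic concrete; the vote counts below are hypothetical, and the 1:5 weighting follows the definition given above.

```python
def c_score(judgements):
    """C-score: number of workers who judged the comment constructive (True = yes)."""
    return sum(1 for judged_constructive in judgements if judged_constructive)

def user_feedback_score(likes, dislikes):
    """User feedback score: #Likes minus 5 times #Dislikes (reflecting the ~1:5 ratio)."""
    return likes - 5 * dislikes

# Hypothetical example: 10 workers, 8 of whom judged the comment constructive.
print(c_score([True] * 8 + [False] * 2))            # -> 8
print(user_feedback_score(likes=120, dislikes=15))  # -> 45
```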
All of the comments in the above two groups were labeled with C-scores in the same way as for Shallow/Deep. Figure 2 shows the frequency distributions of the two groups over C-scores. Surprisingly, both distributions form almost the same shape even though we expected that the comments ordered with the user feedback would have high C-scores. In fact, the correlation coefficient between the user feedback scores and the C-scores was nearly zero, i.e., −0.0036. This means that constructiveness is completely different from user feedback, and using user feedback is not a promising way to show constructive comments in the service. 3 Ranking Constructive News Comments 3.1 Compared Methods We compared the following methods for understanding the characteristics of our datasets. Here, we selected simple SVM-based methods since we can easily interpret the results, although we included the results of neural ranking models in Appendix B. • Like ranks with the user feedback score. • Random ranks randomly. • Length ranks in descending order on the basis 2622 of the comment length. • RankSVM ranks via a rankSVM model (Lee and Lin, 2014) trained to infer relative constructiveness between two comments. Roughly speaking, we solve a binary classification problem of whether or not a comment is more constructive than another one, like SVM. • SVR ranks via a support vector regression model (Vapnik et al., 1997) trained to directly infer the C-score. We used liblinear-ranksvm6 for RankSVM and SVR. The cost parameter was determined from {20, . . . , 2−13} with a validation dataset, where we prepared another 5K comments for each setting for Shallow/Deep. The features for training RankSVM and SVR were made from a comment and the corresponding article. See the next section for the details on preprocessing and the features. 3.2 Preprocessing and Features The preprocessing for training RankSVM and SVR is as follows. We used a morphological analyzer MeCab7 (Kudo et al., 2004), with a neologism dictionary, NEologd8 (Toshinori Sato and Okumura, 2017), for splitting Japanese text into words. We replaced numbers with a special token and standardized letter types, i.e., decapitalization and halfwidth-to-fullwidth.9 We did not remove stop-words because function words would affect the performance in our task, especially for decency. We cut low-frequency words off that appeared only three times or less in each dataset. The dictionary size was about 50,000. The features for a comment (with the corresponding news article) used for RankSVM and SVR are the bag-of-words of the comment, the number of unique words in the comment, the cosine similarity (based on bag-of-words vectors) between the comment and the title, and the bag-ofwords co-occurring in the comment and the title, which are distinguished from the normal bag-ofwords. Note that we used only titles for features to avoid extra labeling and training costs for lengthy article bodies, assuming that a title can be regarded as a summary of the corresponding article. 6https://github.com/FurongPeng/ liblinear-ranksvm 7http://taku910.github.io/mecab/ 8https://github.com/neologd/ mecab-ipadic-neologd 9https://en.wikipedia.org/wiki/ Halfwidth_and_fullwidth_forms 3.3 Evaluation We used normalized discounted cumulative gain (NDCG) (Burges et al., 2005a) as our primary evaluation measure, which is widely used for evaluating ranking models in information retrieval tasks. 
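A minimal sketch of NDCG@k, together with the secondary precision@k measure introduced below, is given here for reference; the C-scores and the inferred ranking are hypothetical, and ties in the true scores are ignored.

```python
import numpy as np

def ndcg_at_k(true_scores, ranked_indices, k):
    """NDCG@k: DCG of the inferred top-k, normalized by the DCG of the ideal ranking."""
    gains = np.asarray(true_scores, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))          # 1 / log2(i + 1), i = 1..k
    dcg = np.sum(gains[np.asarray(ranked_indices)[:k]] * discounts)
    ideal = np.sum(np.sort(gains)[::-1][:k] * discounts)    # Z_k normalization constant
    return dcg / ideal if ideal > 0 else 0.0

def precision_at_k(true_scores, ranked_indices, k):
    """Precision@k: fraction of the inferred top-k that belong to the true top-k."""
    true_top_k = set(np.argsort(true_scores)[::-1][:k])
    return len(true_top_k & set(np.asarray(ranked_indices)[:k])) / k

# Hypothetical C-scores and a model-inferred ranking over five comments (best first).
scores = [9, 0, 6, 3, 7]
ranking = [4, 0, 2, 3, 1]
print(ndcg_at_k(scores, ranking, k=3), precision_at_k(scores, ranking, k=3))
```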
The NDCG is typically calculated for the top-k comments ranked by a ranking model and denoted by NDCG@k = Zk Pk i=1 ri log2 (i+1), where ri represents the true C-score of the i-th ranked comment, and Zk is a normalization constant to scale the value between 0 and 1. This equation means that the value becomes higher (better) as the inferred ranking becomes closer to the correct ranking, especially for top ranked comments. In addition, we used precision@k as our secondary evaluation measure, which is defined as the ratio of correctly included comments in the inferred top-k comments with respect to the true top-k comments. Note that a well-known paper (J¨arvelin and Kek¨al¨ainen, 2002) in the information retrieval field determined NDCG to be more appropriate than precision for graded scores like our setting. 3.4 Results Table 4 shows the results of NDCG@k and precision@k (for k ∈{1, 5, 10}) for Test for the compared models, where RankSVM and SVR have two variations trained with Shallow and Deep. Random was averaged over 10 trials. Note that all values are represented as percentages. The results of Like and Random show that neither of them performed well, which is consistent with our finding that Like has a similar tendency to Random, as described in Section 2. However, Length performed better than Like and Random. This implies that long comments tend to be constructive, but of course, the length of comments is not enough to accurately infer the C-score, compared with RankSVM. Among all variations of RankSVM and SVR, RankSVM with Deep consistently performed the best for our primary evaluation measure NDCG. The differences between NDCGs of RankSVM with Deep and SVR with Shallow were statistically significant in a paired t-test (p < 0.05). As for precision, it was beaten by SVR with Shallow for @1 and @5. This means that RankSVM sometimes failed to find the best solutions (the most constructive comment) but obtained better solutions (fairly constructive ones). 2623 Dataset NDCG@1 NDCG@5 NDCG@10 Prec@1 Prec@5 Prec@10 Like 29.93 31.84 34.99 2.00 6.20 8.70 Random 25.85 27.90 29.06 1.10 4.60 6.50 Length 60.28 64.93 67.72 6.00 20.80 30.04 RankSVM Shallow 72.24 74.63 76.79 14.50 29.40 41.24 RankSVM Deep 74.15 76.44 78.25 13.00 31.60 42.20 SVR Shallow 73.87 75.48 76.97 16.50 32.70 41.00 SVR Deep 69.68 71.99 74.26 11.00 27.20 36.35 Table 4: Results (%) of NDCG@k and precision@k for task of ranking constructive comments. Comparing Shallow and Deep for RankSVM, we can see that RankSVM performed better with Deep than with Shallow because the number of training examples for pairwise ranking models was 2-combinations from n, i.e., n 2  = n(n−1) 2 , given n comments. This means that the number of pairwise examples increases in O(n2). Conversely, SVR performed well with Shallow. Features based on articles can be useful for directly inferring the C-scores without comparing comments in such cases. Similar findings were observed in the results of neural ranking models (see Appendix B), but we omitted them because of space limitations. 4 Related Work Analyzing comments on online news services or discussion forums has been extensively studied (Wanas et al., 2008; Ma et al., 2012; Brand and Van Der Merwe, 2014; Llewellyn et al., 2016; Shi and Lam, 2018). In this line of research, there have been many studies on ranking comments (Hsu et al., 2009; Das Sarma et al., 2010; Brand and Van Der Merwe, 2014; Wei et al., 2016). 
However, their approaches were based on user feedback, which is completely different from constructiveness, as explained in Section 2. Constructiveness has sometimes been introduced in argument analysis frameworks. Napoles et al. (2017) created a dataset for argument analysis on the basis of reply threads, each of which has a label as a constructiveness flag and consists of child comments replying to the parent comment. Kolhatkar and Taboada (2017) proposed a classification model that determines constructiveness for a comment by regarding all comments in a constructive thread as constructive and evaluated it with a dataset of 1K manually annotated comments, which is much smaller than our datasets. Our task is a ranking task based on graded numeric scores and different from their task. If training a regression model with binary labels, the results will be similar to SVR. There are mainly two approaches to analyzing the quality of comments on the basis of their content without using constructiveness. One is hate speech detection (Kwok and Wang, 2013; Nobata et al., 2016; Davidson et al., 2017) and the other is sentiment analysis (Fan and Sun, 2010; Siersdorfer et al., 2014). Although these approaches are useful for other tasks, they do not directly solve our task, i.e., ranking constructive comments. For example, the simple comment “Great!” is positive and is not hate speech, but it is not suitable as a top-ranked comment in our task. Learning-to-rank methods are often used for information retrieval tasks (Liu, 2009). There are several datasets for ranking documents on search engines, such as Microsoft LETOR (Qin et al., 2010; Qin and Liu, 2013) and Yahoo! LTRC (Chapelle and Chang, 2011). Because it is not feasible to label all documents for each query, “possibly” relevant documents are typically sampled by using a simple ranking algorithm such as BM25 (Robertson and Zaragoza, 2009). However, we cannot use such a strategy since comments are basically relevant to an article, and there are many relevant but non-constructive comments. 5 Conclusion We created a new labeled dataset for ranking constructive comments. Experimental results suggested that pairwise ranking models work well with the variation of comments rather than articles. Our future work will include efficiently labeling promising comments via active learning. Acknowledgements We would like to thank anonymous reviewers for their constructive comments. 2624 References Jean-Yves Antoine, Jeanne Villaneau, and Ana¨ıs Lefeuvre. 2014. Weighted Krippendorff’s alpha is a more reliable metrics for multi-coders ordinal annotations: experimental studies on emotion, opinion and coreference annotation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 550–559. Association for Computational Linguistics. Dirk Brand and Brink Van Der Merwe. 2014. Comment Classification for an Online News Domain. In Proceedings of the First International Conference on the Use of Mobile Informations and Communication Technology in Africa, pages 50–55. Stellenbosch University. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005a. Learning to Rank Using Gradient Descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 89–96. ACM. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005b. Learning to Rank Using Gradient Descent. 
In Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 89–96. ACM. Olivier Chapelle and Yi Chang. 2011. Yahoo! Learning to Rank Challenge Overview. In Proceedings of the Learning to Rank Challenge, pages 1–24. PMLR. Anish Das Sarma, Atish Das Sarma, Sreenivas Gollapudi, and Rina Panigrahy. 2010. Ranking Mechanisms in Twitter-like Forums. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM 2010), pages 21– 30. ACM. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017), pages 512–515. AAAI Press. Wen Fan and Shutao Sun. 2010. Sentiment classification for online comments on Chinese news. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), volume 4, pages V4–740–V4–745. IEEE. Chiao-Fang Hsu, Elham Khabiri, and James Caverlee. 2009. Ranking Comments on the Social Web. In Proceedings of the 2009 International Conference on Computational Science and Engineering (CSE 2009), volume 4, pages 90–97. IEEE. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated Gain-based Evaluation of IR Techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Varada Kolhatkar and Maite Taboada. 2017. Constructive Language in News Comments. In Proceedings of the First Workshop on Abusive Language Online, pages 11–17. Association for Computational Linguistics. Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology (second edition). Sage Publications. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying Conditional Random Fields to Japanese Morphological Analysis. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 230–237. Association for Computational Linguistics. Irene Kwok and Yuzhou Wang. 2013. Locate the Hate: Detecting Tweets Against Blacks. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2013), pages 1621–1622. AAAI Press. J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159–174. Ching-Pei Lee and Chih-Jen Lin. 2014. Large-scale Linear RankSVM. Neural Computation, 26(4):781– 817. Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Foundations and Trends in Information Retrieval, 3(3):225–331. Clare Llewellyn, Claire Grover, and Jon Oberlander. 2016. Improving Topic Model Clustering of Newspaper Comments for Summarisation. In Proceedings of the ACL 2016 Student Research Workshop, pages 43–50. Association for Computational Linguistics. Zongyang Ma, Aixin Sun, Quan Yuan, and Gao Cong. 2012. Topic-driven Reader Comments Summarization. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management (CIKM 2012), pages 265–274. ACM. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 3111–3119. Curran Associates, Inc. Courtney Napoles, Aasish Pappu, and Joel R Tetreault. 2017. Automatically Identifying Good Conversations Online (Yes, They Do Exist!). In Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017), pages 628– 631. 
AAAI Press. 2625 Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive Language Detection in Online User Content. In Proceedings of the 25th International Conference on World Wide Web (WWW 2016), pages 145–153. International World Wide Web Conferences Steering Committee. Tao Qin and Tie-Yan Liu. 2013. Introducing LETOR 4.0 Datasets. CoRR, abs/1306.2597. Tao Qin, Tie-Yan Liu, Jun Xu, and Hang Li. 2010. LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval. Information Retrieval, 13(4):346–374. Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Bei Shi and Wai Lam. 2018. Reader Comment Digest Through Latent Event Facets and News Specificity. IEEE Transactions on Knowledge and Data Engineering. Stefan Siersdorfer, Sergiu Chelaru, Jose San Pedro, Ismail Sengor Altingovde, and Wolfgang Nejdl. 2014. Analyzing and Mining Comments and Comment Ratings on the Social Web. ACM Transactions on the Web (TWEB), 8(3):17:1–17:39. Taiichi Hashimoto Toshinori Sato and Manabu Okumura. 2017. Implementation of a word segmentation dictionary called mecab-ipadic-neologd and study on how to use it effectively for information retrieval (in japanese). In Proceedings of the Twentythree Annual Meeting of the Association for Natural Language Processing, pages NLP2017–B6–1. The Association for Natural Language Processing. Vladimir Vapnik, Steven E. Golowich, and Alex J. Smola. 1997. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing. In Advances in Neural Information Processing Systems 9 (NIPS 1997), pages 281–287. MIT Press. Nayer Wanas, Motaz El-Saban, Heba Ashour, and Waleed Ammar. 2008. Automatic Scoring of Online Discussion Posts. In Proceedings of the Second ACM Workshop on Information Credibility on the Web, pages 19–26. ACM. Zhongyu Wei, Yang Liu, and Yi Li. 2016. Is This Post Persuasive? Ranking Argumentative Comments in Online Forum. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 195–200. Association for Computational Linguistics. A Details on Instructions for Crowdsourced Workers Detailed instructions (translated in English) on our crowdsourcing task are as follows. We included five comments of the same article in each task to reduce workers’ annotation cost. Instruction: Given five comments for an article, please select all comments that satisfy the following precondition and at least one main condition. - Pre-condition: The comment is related to the article and is not an unpleasant one, including slander. - Main-condition 1: The comment intends to cause discussions on the basis of the author’s opinion. - Main-condition 2: The comment is objective and supported by fact or reason. - Main-condition 3: The comment gives a new idea, solution, or insight. - Main-condition 4: The comment is a user’s rare experience related to the article. B Results of Neural Models We confirmed that the results of neural models have a similar tendency to those of SVM-based models, although we omitted these results due to space limitations. We compared a neural pairwise ranking model, RankNet, and a neural regression model, LSTMReg, as follows. • RankNet ranks via a neural pairwise ranking model, RankNet (Burges et al., 2005b). 
The key concept of this model is similar to that of RankSVM, i.e., solving the ranking problem as a classification problem of whether a comment is more constructive than another one. Specifically, the model is constructed to predict the ranking score of a comment and trained so that, given two comments, the magnitude relation of the predicted scores corresponds to that of the true constructiveness scores, via cross entropy loss. • LSTMReg ranks via an LSTM-based regression model. The basic structure is the same as RankNet, but the training is performed so that, given a comment, the predicted score corresponds to the true constructiveness score, via mean squared error loss. The experimental settings were as follows. The preprocessing was the same as in RankSVM, except that cutoff tokens were replaced with a special token “<unk>”. We used 300 dimensional embeddings of a skip-gram model (Mikolov 2626 Dataset NDCG@1 NDCG@5 NDCG@10 Prec@1 Prec@5 Prec@10 RankNet Shallow 73.42 73.91 75.11 13.67 27.40 37.81 RankNet Deep 75.19 77.17 78.62 13.17 31.72 41.68 LSTMReg Shallow 71.71 73.96 75.74 12.68 28.48 38.99 LSTMReg Deep 69.40 72.51 74.21 10.55 26.75 36.28 Table 5: Results (%) of NDCG@k and precision@k for task of ranking constructive comments for RankNet and LSTMReg. et al., 2013) trained with 1.5 million unlabeled news comments by using an open source software, gensim,10 with the default parameters. Both RankNet and LSTMReg had the same structure, i.e., an encoder-scorer. The encoder consisted of two LSTMs with 300 units to separately encode a comment and its title, and the scorer predicted the ranking score of the comment via a full-connected layer after concatenating the two encoded (comment and title) vectors. We used the Adam optimizer (α = 0.0001, β1 = 0.9, β2 = 0.999, ϵ = 1 × 10−8) to train these models. The batch size was 10 (pairs sampled from each article when training RankNet), and the number of iterations of batches was 10,000. The formal definition of the loss function of RankNet is the same as in the original paper. Given two comments c1 and c2, we define the probability of c1 being more constructive than c2 as p = σ(f(c1) −f(c2)), where σ(·) is a sigmoid function, and f(c) is the predicted score of c. The cross entropy loss is calculated as −p log p −(1 −p) log(1 −p), where p is 1 if the true constructive score of c1 is higher than that of c2, 0 if lower, and 0.5 if otherwise. Figure 5 shows the results of RankNet and LSTMReg. Looking at our primary measure NDCG, we can see that RankNet with Deep clearly performed the best. Furthermore, comparing the results with Shallow and Deep, RankNet with Deep performed better than RankNet with Shallow, while LSTMReg with Shallow performed better than LSTMReg with Deep. These findings are consistent with the results of SVM-based models. 10https://radimrehurek.com/gensim/
2019
250
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2627–2632 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2627 Enhancing Air Quality Prediction with Social Media and Natural Language Processing Jyun-Yu Jiang†, Xue Sun†, Wei Wang† and Sean Young‡. †Department of Computer Science, University of California, Los Angeles, CA, USA ‡Department of Family Medicine, University of California, Los Angeles, CA, USA {jyunyu, cynosure, weiwang}@cs.ucla.edu, [email protected] Abstract Accompanied by modern industrial developments, air pollution has already become a major concern for human health. Hence, air quality measures, such as the concentration of PM2.5, have attracted increasing attention. Even some studies apply historical measurements into air quality forecast, the changes of air quality conditions are still hard to monitor. In this paper, we propose to exploit social media and natural language processing techniques to enhance air quality prediction. Social media users are treated as social sensors with their findings and locations. After filtering noisy tweets using word selection and topic modeling, a deep learning model based on convolutional neural networks and overtweet-pooling is proposed to enhance air quality prediction. We conduct experiments on 7month real-world Twitter datasets in the five most heavily polluted states in the USA. The results show that our approach significantly improves air quality prediction over the baseline that does not use social media by 6.9% to 17.7% in macro-F1 scores. 1 Introduction In recent centuries, industrialization has considerably changed human society by providing a stimulus to economic growth and improved life quality. However, the advancement is accompanied by the increase in air pollutant emissions and risks to public health. As a consequence, predicting real-time air quality information (AQI), such as the concentration of PM2.5, has attracted more and more attention. Air quality prediction may help the government and society to better protect their citizens from potentially harmful effects of poor air quality. To forecast AQI, one of the most conventional approaches is to exploit historical air quality and treat the task as a time series prediction problem (Genc et al., 2010; Zheng et al., 2015). However, the air quality information can be too sophisticated to be predicted by only past AQI without any additional knowledge. For example, other environmental factors like humidity and temperature can affect the air quality when real-world events like wildfires may also play a role. To learn the additional information, most of the relevant studies collect data from additional sensors like images (Jiang et al., 2011) and ground sensors (Zheng et al., 2015). Nevertheless, these sensors are expensive in not only installation but also maintenance. As a result, exploiting sensors for air quality prediction may be too costly for most of the cities. To learn additional knowledge without physical sensors, one of the most effective approaches is to leverage the wisdom of the crowd on the internet. For example, 81% of the adults in the USA spend on average two hours on social media and collectively publish 170 million tweets1 every day on their feelings and observations (Wu et al., 2018). In other words, social media users can be considered as “social sensors” to perceive environmental changes and real-world events. 
Although social sensing has been applied to detect or predict several real-world events, such as influenza surveillance (Santillana et al., 2015; Dredze, 2012; Achrekar et al., 2011) and earthquakes (Sakaki et al., 2010, 2013), none of them focuses on predicting the air quality information. Note that although Jiang et al. (2015) and Wang et al. (2017) exploit social media to infer AQIs at current or past time, they cannot predict the future air quality. Moreover, the AQIs in these previous studies usually have considerable fluctuations, under which circumstance users tend to publish related posts, 1For simplicity, the posts published on social media are called tweets in this paper. 2628 which makes the inference task much more manageable than general cases. In general cases, air quality changes gradually most time, which may be not sufficiently documented in social media. For instance, in California, more than 80% of the changes in air quality conditions are between good and moderate. In this paper, we aim to leverage social media for air quality prediction. Our approach consists of three stages, including (1) tweet filtering, (2) feature extraction, and (3) air quality prediction. In the first stage, all of the incoming tweets are filtered by geographical locations and keywords extracted from statistical and topical modeling. After filtering the tweets, a convolutional neural network is applied to extract the individual feature vector for each tweet with a max-over-time pooling layer. A max-over-tweet layer is then proposed to aggregate the feature vectors of all tweets as the social media features for predicting air quality using a fully-connected hidden layer to combine with historical measurements. Finally, experiments conducted on 7-month large-scale Twitter datasets show that our approach significantly outperforms all comparative baselines. 2 Air Quality Prediction with Social Media and NLP Following the previous studies (Zheng et al., 2015), we model the problem as a multi-class classification task. According to the Environmental Protection Agency 2 (EPA) in USA, AQIs can be categorized into six classes as shown in Figure 1. Note that more than 99% of daily AQIs in the USA are similar and falling in the first two classes so that the classification task is more laborious than predicting numerical AQIs. Given a location l and a time t, the corpus D(l, t) is defined as the N tweets published by any user located at the location l at time t. a(l, t) denotes the AQI value in the location l at time t while the historical measurements H(l, t) = a(l, t), a(l, t −1), · · · , a(l, t −T + 1) provide AQIs at T time points. Given the corpus D(l, t) and the historical measurements H(l, t) at location l at time t, our goal is to predict the corresponding class y of the AQI at the next time point t + 1. Framework Overview. Figure 1 illustrates the proposed three-stage framework. In the first stage, 2EPA: https://www.epa.gov/ AQI Level of Concern 0-50 Good 51-100 Moderate 101-150 Unhealthy for Sensitive Groups 151-200 Unhealthy 201-300 Very Unhealthy 301-500 Hazardous Table 1: Categorization of AQI from EPA. the incoming tweets are filtered to remove irrelevant information. In the second stage, representative features are extracted from filtered tweets and historical measurements. In the last stage, we predict the category of air quality with a hidden layer and a softmax function. 
2.1 Stage 1: Tweet Filtering In most of the cities, the majority of tweets should be irrelevant to air quality because users are less likely to discuss air quality situations unless there is a dramatic change. Hence, we need to filter tweets before using them for air quality prediction. Following the previous work (Shike Mei and R.Dyer, 2014), we use three groups of keywords for filtering tweets, including (1) environmentrelated terms like smog released by EPA, (2) health-related terms like choke provided by the National Library of Medicine3, and (3) significant terms including the most significant 128 words correlated to high AQIs in χ2 statistics (Sch¨utze et al., 2008). The incoming tweets are filtered by the aforementioned keywords in the three groups. The tweets containing at least one of these keywords are likely to be relevant to the topics about air quality. We denote the corpus of relevant tweets as D′(l, t). The features extracted from relevant tweets are expected to be more robust. 2.2 Stage 2: Feature Extraction To extract features from text data, the effectiveness of convolutional neural networks (CNNs) has been demonstrated in many studies (Kim, 2014). In this paper, CNNs with max-over-time pooling are applied to derive the representation for every tweet. We then propose max-over-tweet pooling to aggregate tweet representations across all relevant tweets as the corpus representation. Finally, the features can be acquired by concatenating the 3https://www.nlm.nih.gov/medical-terms.html 2629 Tweet Filtering Feature Extraction Air Quality Prediction ... Convolutional Layer Max-over-time Pooling ... · · · · · · · · · · · · · · · Embedding Layer ... ... ... Unfiltered Tweet Stream ... ... ... ... ... Max-over-tweet Pooling ... Historical Measurements Hidden Layer W1 H(l, t) m1 mall cj 1 WN cj N mN D(l, t) D0(l, t) Relevant Tweets p(l, t) ˆy(l, t) Figure 1: The framework of the proposed approach. corpus representation and the historical measurements for prediction. Tweet Representation. A tweet wi can be represented by a matrix Wi ∈Rd×|wi|, where d is the dimension of word embeddings; and |wi| is the number of words in the tweet. As shown in Figure 1, a CNN with d × k kernels extracts the n-gram semantics of k contiguous words. Note that the row dimension of kernels is identical to the word embedding dimension to jointly consider the overall embedding vector. The convolution with the j-th kernel produces a numerical vector cj i, which is then aggregated by max-over-time pooling (Collobert et al., 2011; Kim, 2014). As a result, the representation of a tweet mi can be derived by chaining the pooled results of all kernels. Corpus Representation. Since relevant tweets in the corpus can be myriad and not fixed, we need to aggregate various representations into an ultimate representation for the whole corpus. Here we propose max-over-tweet pooling to derive the corpus representation. The layer of max-over-tweet pooling reads all tweet representations and aggregates them by deriving the maximum value for each representation dimension. More precisely, a dimension of the representation can be treated as the sensor about a particular topic while the max-overtweet pooling layer attempts to find the maximum sensor value among the sensor values of all relevant tweets. Finally, the max-over-tweet pooling layer can derive the corpus representation mall by considering all tweet representations. 
After determining the corpus representation mall, the final features x(l, t) for air quality prediction can be constructed by concatenating mall and the historical measurements H(l, t). As a consequence, the final features incorporate the knowledge of existing observations and the crowd power on social media. 2.3 Stage 3: Air Quality Prediction To address the air quality prediction, we apply a fully-connected hidden layer to estimate the logits of all classes. More precisely, the logits z(l, t) can be computed as z(l, t) = F(x(l, t)), where F(·) is a fully-connected hidden layer with L hidden units; the dimension of z(l, t) is identical to the number of classes in air quality categorization. Then the probabilistic score for each class can be obtained with a softmax function (Goodfellow et al., 2016) when the prediction can be finally determined as the class with the highest score. Finally, the whole system can be computed and trained in an end-to-end manner and optimized by the cross-entropy loss (Goodfellow et al., 2016). 3 Experiments 3.1 Experimental Settings. Data Collection. For social media data, we exploit the Twitter developer API4 to crawl 1% of general English tweets published in the USA with location tags from November 17, 2015, to June 12, 2016. Each of the crawled tweets is associated with the corresponding county and state. EPA releases daily AQIs for every county in the USA publicly, which serve as the historical measurements and the gold standard. Experimental Datasets. We conduct experiments to predict daily air quality conditions for locations fine-grained to the county level. More specifically, each of the samples can be represented by a tuple (l, t), where l is a county in the USA; t is a date 4https://developer.twitter.com/en/ docs.html 2630 Dataset CA ID IN IL OH Overall tweets 85.3M 1.2M 9.2M 23.2M 31.7M Relevant tweets 11.8M 0.07M 0.5M 1.0M 1.4M Training tuples 7,435 1,175 2,990 1,804 3,647 Validation tuples 1,487 235 598 361 729 Testing tuples 1,483 235 599 361 730 Table 2: Statistics of five experimental datasets. The relevant tweets refer to the remaining tweets after the stage of tweet filtering. with crawled tweets. For each tuple, the historical measures are the AQIs in the previous seven days as seven numerical features. Five experimental datasets are then constructed with the data of the five most polluted states according to the annual report from America Health Ranking5, including California (CA), Idaho (ID), Illinois (IL), Indiana (IN), and Ohio (OH). The overall datasets are further partitioned by time into a 30-week training dataset, two 5-week datasets for validation and testing. As a result, Table 2 shows the statistics of five experimental datasets. Note that more than 90% tweets are filtered as irrelevant tweets in the stage of tweet filtering. It also shows the necessity of filtering irrelevant tweets that can probably be noises for air quality prediction. Implementation Details Our approach is implemented by Tensorflow (Abadi et al., 2016) and trained by the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate 10−3. After parameter tuning, λ is set to 10−3 while the number of hidden units in the hidden layer L is 128. The dimension of the word embeddings is 300. All of the activation functions in the model are set to exponential linear units (ELUs) (Clevert et al., 2015). For CNNS, 96 kernels with different sizes from 2 to 4 are applied to obtain a 96-dimensional representation for each relevant tweet in the corpus. Baseline Methods. 
Because we are the first study using social media to predict air quality situation, there are much few available methods. Even though some studies (Jiang et al., 2015) claim the capability of inferring ongoing AQIs with social media, they apply strong restrictions to derive features for highly polluted cities so that they are incapable of tackling most of the cases in our experiments. In the experiments, we compare with two baseline methods as follows: (1) Prediction with only AQIs (PAQI): To under5https://www.americashealthrankings.org stand the base performance, PAQI predicts the air quality conditions with only historical measurements. The knowledge of social media is ignored for this baseline method. (2) Bag-of-words Features (BOW): To demonstrate the effectiveness of extracted features, we replace the extracted features with conventional bag-of-words features as a baseline method. Note that all baselines apply a neural network with a hidden layer for prediction. 3.2 Experimental Results For evaluation, micro- and macro-F1 scores are selected the evaluation metrics. Table 3 demonstrates the performance of the three methods. Micro-F1 scores are generally better than macroF1 scores because the trivial cases like the class of good air quality are the majority of datasets with higher weights in micro-F1 scores. PAQI is better than BOW although BOW uses the knowledge of social media. It is because BOW features involve all irrelevant words so that the actual essential knowledge cannot be recognized. Our approach significantly outperforms all baseline methods in almost all metrics. More precisely, our approach improves the air quality prediction over PAQI from 6.92% to 17.71% in macro-F1 scores. The results demonstrate that social media and NLP can benefit air quality prediction. In addition to the unbalanced datasets based on the categorization of EPA, we also conduct the experiments with relatively balanced datasets to show the robustness of our proposed approach. More specifically, the categorization is refined to four classes with finer windows of AQIs, including: [0, 25), [25, 50), [50, 75), and [75, ∞). Figures 2 and 3 illustrate the Micro- and Macro-F1 scores of PAQI and our approach in the refined datasets. The experimental results show that the improvements are consistent with the experiments in unbalanced datasets of extreme air quality prediction. It also demonstrates the robustness of our proposed approach. 4 Conclusions and Discussions In this paper, we propose a novel framework for leveraging social media and NLP to air quality prediction. After filtering irrelevant tweets, a CNN derives a feature vector for each tweet with max-over-time pooling. We also propose the novel max-over-tweet pooling to aggregate the feature vectors of all tweets over numerous hid2631 Dataset Method Micro Average Macro Average Prec. Rec. F1 Prec Rec. 
F1 BOW 0.807 0.829 0.809 0.687 0.619 0.631 ID PAQI 0.816 0.728 0.757 0.611 0.677 0.617 Ours 0.863 0.811 0.828 0.691 0.776 0.714 BOW 0.792 0.786 0.786 0.508 0.508 0.501 IN PAQI 0.847 0.682 0.737 0.567 0.649 0.548 Ours 0.855 0.849 0.852 0.640 0.652 0.645 BOW 0.775 0.802 0.791 0.506 0.499 0.484 IL PAQI 0.834 0.686 0.737 0.580 0.666 0.566 Ours 0.844 0.847 0.845 0.646 0.638 0.640 BOW 0.744 0.780 0.760 0.515 0.512 0.510 OH PAQI 0.800 0.683 0.724 0.569 0.622 0.562 Ours 0.813 0.813 0.815 0.629 0.627 0.627 BOW 0.647 0.683 0.660 0.495 0.488 0.485 CA PAQI 0.826 0.725 0.745 0.700 0.772 0.694 Ours 0.830 0.786 0.798 0.728 0.786 0.742 Table 3: The overall classification performance of the baseline methods and our approach. All of the improvements of our approach (ours) over PAQI are significant with a paired t-test at a 99% significance level. ID IN IL OH CA 0.3 0.4 0.5 0.6 Micro-F1 PAQI Our Approach Figure 2: Micro F1 scores with four-class categorization. All of the improvements of our approach over the baseline method are significant with a paired t-test at a 99% significance level. den topics. Finally, the corpus representation can be taken into account to predict air quality with historical measurements. The results of extensive experiments show that our proposed approach significantly outperforms two comparative baseline methods across both balanced and unbalanced datasets for different locations in the USA. This is because: (1) Most noisy and irrelevant tweets are effectively filtered in the stage of tweet filtering; (2) The convolutional neural network and the proposed max-over-tweets are able to extract essential knowledge about air quality prediction from myriad tweets in social media; (3) There are some ID IN IL OH CA 0.3 0.4 0.5 0.6 Macro-F1 PAQI Our Approach Figure 3: Macro F1 scores with four-class categorization. All of the improvements of our approach over the baseline method are significant with a paired t-test at a 99% significance level. limitations on only using historical measurements, such as the capability of recognizing real-world events. Acknowledgement We would like to thank the anonymous reviewers for their helpful comments. The work was partially supported by NIH U01 HG008488, R01 A132030, and NSF DGE-1829071. 2632 References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: a system for large-scale machine learning. In OSDI, volume 16, pages 265– 283. Harshavardhan Achrekar, Avinash Gandhe, Ross Lazarus, Ssu-Hsin Yu, and Benyuan Liu. 2011. Predicting flu trends using twitter data. In Computer Communications Workshops (INFOCOM WKSHPS), 2011 IEEE Conference on, pages 702–707. IEEE. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Mark Dredze. 2012. How social media will change public health. IEEE Intelligent Systems, 27(4):81– 84. D Deniz Genc, Canan Yesilyurt, and Gurdal Tuncel. 2010. Air pollution forecasting in ankara, turkey using air pollution index and its relation to assimilative capacity of the atmosphere. Environmental monitoring and assessment, 166(1-4):11–27. 
Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. 2016. Deep learning, volume 1. MIT press Cambridge. Wei Jiang, Yandong Wang, Ming-Hsiang Tsou, and Xiaokang Fu. 2015. Using social media to detect outdoor air pollution and monitor air quality index (aqi): a geo-targeted spatiotemporal analysis framework with sina weibo (chinese twitter). PloS one, 10(10):e0141185. Yifei Jiang, Kun Li, Lei Tian, Ricardo Piedrahita, Xiang Yun, Omkar Mansata, Qin Lv, Robert P Dick, Michael Hannigan, and Li Shang. 2011. Maqs: a personalized mobile sensing system for indoor air quality monitoring. In Proceedings of the 13th international conference on Ubiquitous computing, pages 271–280. ACM. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web, pages 851–860. ACM. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2013. Tweet analysis for real-time event detection and earthquake reporting system development. IEEE Transactions on Knowledge and Data Engineering, 25(4):919–931. Mauricio Santillana, Andr´e T Nguyen, Mark Dredze, Michael J Paul, Elaine O Nsoesie, and John S Brownstein. 2015. Combining search, social media, and traditional data sources to improve influenza surveillance. PLoS computational biology, 11(10):e1004513. Hinrich Sch¨utze, Christopher D Manning, and Prabhakar Raghavan. 2008. Introduction to information retrieval, volume 39. Cambridge University Press. Jing Fan Xiaojin Zhu Shike Mei, Han Li and Charles R.Dyer. 2014. Inferring air pollution by sniffing social media. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 534–539. ACM. Yan-dong Wang, Xiao-kang Fu, Wei Jiang, Teng Wang, Ming-Hsiang Tsou, and Xin-yue Ye. 2017. Inferring urban air quality based on social media. Computers, Environment and Urban Systems, 66:110–116. Tailai Wu, Zhaohua Deng, Zhanchun Feng, Darrell J Gaskin, Donglan Zhang, and Ruoxi Wang. 2018. The effect of doctor-consumer interaction on social media on consumers health behaviors: Crosssectional study. Journal of medical Internet research, 20(2). Yu Zheng, Xiuwen Yi, Ming Li, Ruiyuan Li, Zhangqing Shan, Eric Chang, and Tianrui Li. 2015. Forecasting fine-grained air quality based on big data. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2267–2276. ACM.
2019
251
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2633–2638 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2633 Twitter Homophily: Network Based Prediction of User’s Occupation Jiaqi Pan∗ University of Electronic Science and Technology of China [email protected] Rishabh Bhardwaj∗ National University of Singapore rishabhbhardwaj [email protected] Wei Lu Singapore University of Technology and Design [email protected] Hai Leong Chieu DSO National Laboratories [email protected] Xinghao Pan DSO National Laboratories [email protected] Ni Yi Puay DSO National Laboratories [email protected] Abstract In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user’s occupational class. We show that the content information of a user’s tweets, the profile descriptions of a user’s follower/following community, and the user’s social network provide useful information for classifying a user’s occupational group. In our study, we extend an existing dataset for this problem, and we achieve significantly better performance by using social network homophily that has not been fully exploited in previous work. In our analysis, we found that by using the graph convolutional network to exploit social homophily, we can achieve competitive performance on this dataset with just a small fraction of the training data. 1 Introduction Twitter (http://twitter.com) is a microblogging service launched in 2006, where, a user can publish messages with up to 280 characters, called “tweets”. Unlike many other social networking platforms, such as Facebook and LinkedIn, Twitter does not provide structured fields for users to fill in personal information. However, a user can write a 160-character-long small public summary about itself called a “Bio”. Besides linguistic information from tweets and Bios, online social media is a rich source of network information. People’s personal networks are homogeneous, i.e., friends share more attributes such as race, ethnicity, religion, and occupation–known as the homophily principle (McPherson et al., 2001). Such network information has been utilized in friend recommendation (Guy et al., 2010), community detection ∗Equal Contribution; work performed while both authors were visiting Singapore University of Technology and Design (SUTD). T1: Day at the races. T2: The new pitch era starts here. Bio Tweets Groundsman. Wolves fan. Horse racing enthusiast. A Commercial and Domestic Grounds Maintenance company T1: Pitch at wembley looks great. T2: Spurs will have best stadium in uk. Follow Figure 1: User and Network information on Twitter Microblog. (Yang and Leskovec, 2013), etc. Figure 1 shows two users connected on Twitter. By looking at their Bio and tweets, it can be inferred that these users share the same occupational interest. Profiling users can enhance service quality and improve product recommendation, and hence is a widely studied problem. User occupational class prediction is an important component of user profiling and a sub-task of user demographic feature prediction. Existing approaches to predicting Twitter users’ demographic attributes explore, select, and combine various features generated from text and network to achieve the best predictive performances in respective classification tasks (Han et al., 2013; Miller et al., 2012; Preot¸iuc-Pietro et al., 2015; Huang et al., 2015; Aletras and Chamberlain, 2018). 
The three categories of features are: account level features, tweet text features, and network based features. Past research have shown the distinctive usage of language across gender, age, location, etc. in tweets (Sloan et al., 2015; Cheng et al., 2010; Burger et al., 2011; Rao et al., 2010), which makes content based prediction effective. As for user occupational class prediction, Preot¸iuc-Pietro et al. (2015) built a dataset where 2634 users are assigned to hierarchical job categories. They used word cluster distribution features of content information to predict a user’s occupational group. Aletras and Chamberlain (2018) constructed a user’s followings connections to learn the user embedding as a feature input to the classification models. Considering the regional disparities of economic development stages, the major job categories may vary significantly across regions. Sloan et al. (2015) summarized occupation distribution of Twitter users in the UK by looking into their profiles. In this paper, we analyze the usefulness of a user’s network information over the user’s tweets for predicting its occupational group. We extend the existing dataset for occupation classification (Preot¸iuc-Pietro et al. (2015)) by introducing the network information about a user, i.e. follower/following IDs together with their Bio descriptions, and we construct a user-centric network to extract useful community and text based features. The acquired features from the network are then exploited using a graph neural network. The obtained results show the importance of a network information over tweet information from a user for such a task. 2 Graph Convolutional Network A Graph Convolutional Network (GCN) (Kipf and Welling, 2017) defines a graph-based neural network model f(X, A) with layer-wise propagation rules: ˆA = ˜D−1/2(A + λI) ˜D−1/2 (1) X(l+1) = σ( ˆAX(l)W (l) + b(l)) (2) where X is the feature matrix for all the nodes with X(0) being the initial feature input of size dnodes × dfeatures, A is the adjacency matrix of dimension dnodes × dnodes, ˜D is the degree matrix of A + λI, λ is a hyperparameter controlling the weight of a node against its neighbourhood, and W (l) and b(l) are trainable weights and bias for the l-th layer, respectively. In each layer of GCN, a node aggregates its direct neighbours’ features according to ˆA and linearly transforms the representation using W and b. A nonlinear activation function σ (e.g., ReLu) is then applied. The number of layers of GCN decides the number of hops away that the neighbours’ features will be smoothed over for each node. Gr SOC Users 1 Managers, Directors, Senior Officials 461 2 Professional Occ. 1,611 3 Associate Profess., Technical Occ. 926 4 Administrative Secretarial Occ. 162 5 Skilled Trades Occ. 768 6 Caring, Leisure, Other Service Occ. 259 7 Sales and Customer Service Occ. 58 8 Process, Plant, Machine Operatives 188 9 Elementary Occ. 124 Table 1: The table shows the major groups (left column) and categorized jobs with different sub-major groups (middle column) by SOC. The right-most column shows the number of main users in the data. 3 Experimental Setup 3.1 Data We base our work on a publicly available Twitter dataset that maps 5,191 users to 9 major occupational classes (Preot¸iuc-Pietro et al., 2015). The dataset contains user IDs (we call these users the main users henceforth) and the bag-of-words from tweets. 
The hierarchical structure of occupational classes in the data was defined based on the Standard Occupation Classification (SOC) from the UK1. To explore the role of network information in occupational class prediction, we extend the above dataset by crawling follower/following IDs (henceforth referred to as follow IDs) for each main ID (IDs corresponding to main users). For the crawled follow IDs, we further crawl their Bio descriptions. We refer to the extended dataset as ED. ED contains 4,557 main users with both followers and followings information. The remaining Twitter accounts could not be scrapped because of various reasons such as account suspension and protected tweets. Table 1 shows the occupational class distribution of the main users in the ED. In all our work, we discard the Bio information of the main users as these were used to annotate this dataset. We tokenize the Bio text of the follow IDs using the Glove Twitter pre-processing guidelines2. As for social network construction, we consider each follower/following relationship as an undirected edge. Based on the reasoning that the social network information is passed between main IDs 1http://www.ons.gov.uk/ 2https://nlp.stanford.edu/projects/ glove/preprocess-twitter.rb 2635 mainly through some common follow IDs, the follow IDs that only connect to very few main IDs will have minimum functionality in information flow. Thus, we decide to filter the graph by keeping the follow IDs with more than 10 connections to the main IDs. All connections between main IDs are retained. The filtering step results in 29 main IDs losing all their connections. For all such isolated main IDs, we retrieve all its follow IDs having at least one other main ID connection. After all these operations, we are able to construct an un-weighted graph in which all the main IDs are connected. The filtered graph contains 34,630 unique users (including 4,557 main IDs) and 586,303 edges. Although the main users are not collected to be connected to each other – only 2,550 main IDs have at least one direct connection to another main ID, we find that they often share common follow IDs which allows us to retrieve their social representations. To compare with previous works, we also construct a partial network dataset that contains only following IDs of all the 4,557 main IDs. We refer to this partial dataset as PD. PD adheres to the same network construction methodology as ED. We divide the dataset into training, development, and test sets using stratified split with the splitting ratio of 80%, 10%, and 10%. All the experimental results are reported on the same test set. The split information and the processed dataset ED can be found together with code on github: https://github.com/jqnap/ Twitter-Occupation-Prediction. 3.2 Features and Models Node Embeddings: To encode user-user social relationship of main IDs with the follow network, we learn latent representations of all IDs (node embedding) which can be easily exploited for the prediction task. The embeddings are learned by forming node sequences using Deep Walk (Perozzi et al., 2014). Based on the network processing strategy used in Aletras and Chamberlain (2018), we construct unweighted bipartite graphs using our filtered network. The two sides of a bipartite graph are follow IDs and main IDs respectively. Note that the main ID-main ID connections will break the bipartiteness. To resolve this, we duplicate the main ID nodes to the follow IDs’ side and then link connections within main IDs. 
We construct for both ED and PD, and obtain a full graph (fG) and a partial graph (pG) respectively. Next, we performed 10 random walks starting from each main ID, alternating between main ID and followers/followings with a walk length of 80. For each node, the walk sequence is used to generate embeddings using a similar approach to word2vec (Mikolov et al., 2013). We use the same hyper-parameters as in Aletras and Chamberlain (2018). Text Features: To have a valid comparison with existing approaches, we construct two sets of text features: (1) bag-of-clusters (Preot¸iuc-Pietro et al., 2015): we assign each word that appears in each main ID’s concatenated tweets document to its corresponding word cluster, where the word clusters are obtained by applying spectral clustering (Ng et al., 2002; Shi and Malik, 2000) to word embeddings. Next, we calculate the cluster assigning frequencies for each main ID. (2) bag-of-words (BOW): since the initial dataset used the Bio information of the main users to annotate their occupations, we remove all the Bio information of main users. We kept only the most frequent 5,000 words from the Bio (of other users) and another 5,000 words from tweets text as the dictionary of separate BOW vectors to the model. We feed the obtained text features and node embedding features to both the Logistic Regression (LR) classifier and the Support Vector Machine (SVM) classifier 3. Both classifiers are trained following the one-vsall approach for the 9-way classification task. ℓ2 regularization is used for LR, whose coefficient is tuned based on the development set. We use the RBF kernel for SVM, normalize the features before feeding them to SVM as inputs, and tune the regularization coefficient C using the development set. GCN: In the case of GCN (as shown in Figure 2), we use its transductive semi-supervised setting. The inputs are the adjacency matrix of all the network IDs and a feature matrix of the Bio’s bagof-words. Specifically, we keep the input feature vectors corresponding to the main IDs as null (all zeros), since their Bios were discarded. We experiment GCN with 2, 3 and 4 convolutional layers. The 3-layer GCN slightly outperformed the 3We use the scikit-learn implementations of LR and SVM classifiers: https://scikit-learn.org/ 2636 Adjacency (NxN) Feature (Nxf) N Users User User f 2K 2K 9 Softmax Probability distribution across class for users 3 Conv Layers Figure 2: GCN architecture for occupational class prediction. 2K is the best performing hidden size. 2-layer GCN and is on-par with the 4-layer GCN. We also test another setting where we do not use the Bio information: we keep the feature as a matrix of one-hot encoded vectors corresponding to all 34,630 IDs. For all the experiments, we set λ to 1 in Equation 1. 4 Results and Discussion 4.1 Text Features and Node Embeddings As shown in Table 2, we compare our results using network information with existing methods: bagof-clusters (Preot¸iuc-Pietro et al., 2015) and Deepwalk on the followings graph concatenated with bag-of-clusters (Aletras and Chamberlain, 2018). We first conduct experiments on our collected ED dataset with 4,557 main users using existing methods. The better accuracy among existing methods is given by the concatenated bag-ofclusters and Deepwalk embeddings: 55.0%. Next, we investigate the performance of bagof-words features from main ID tweets and follow Bios using logistic regression (LR) and support vector machines (SVM). 
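The node-embedding step above can be sketched as follows: short random walks over the bipartite follow graph are treated as "sentences" and fed to a skip-gram model. We assume a gensim-style Word2Vec interface (gensim ≥ 4) and an adjacency stored as a dict from node to neighbour list (`adj` and `main_ids` are placeholders). The embedding dimension (32), the 10 walks per main ID, and the walk length of 80 follow the text; the remaining skip-gram hyper-parameters shown here are illustrative rather than the exact values of Aletras and Chamberlain (2018).

```python
import random
from gensim.models import Word2Vec

def random_walks(adj, main_ids, num_walks=10, walk_len=80, seed=0):
    """adj: dict node -> list of neighbours; on the bipartite graph a walk
    alternates between the main-ID side and the follow-ID side."""
    rng = random.Random(seed)
    walks = []
    for m in main_ids:
        for _ in range(num_walks):
            walk, node = [m], m
            while len(walk) < walk_len and adj.get(node):
                node = rng.choice(adj[node])
                walk.append(node)
            walks.append([str(n) for n in walk])   # gensim expects string tokens
    return walks

walks = random_walks(adj, main_ids)
node2vec = Word2Vec(walks, vector_size=32, window=5, min_count=0,
                    sg=1, workers=4, epochs=5)     # skip-gram, 32-dim as in the text
embedding = {n: node2vec.wv[n] for n in node2vec.wv.index_to_key}
```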
From the experiments on tweets, we find that using the bag-ofwords features achieve comparable performance to using the bag-of-clusters features. Thus we opt for the bag-of-words representation in subsequent experiments. The optimized model using Bio text features outperforms using tweet content. It can be inferred that the Bio descriptions of follow accounts provide more useful information compared to tweets. The reason could be the higher noise in tweets, while people are comparatively more careful while writing their Bios. The next set of results uses follow network features. Based on Aletras and Chamberlain (2018), we perform deep walk with 32-dim learned node representations, and used it as input to LR and LR SVM Word Clusters (200)∗ 49.8 52.6 Clusters+DeepWalk-pG (200 + 32)∗ 51.3 55.0 Main ID tweets BOW (5, 000) 53.7 54.6 F-Bio (5, 000) 56.6 56.3 DeepWalk-fG (32) 51.5 55.3 DeepWalk-fG + F-Bio (32 + 5, 000) 56.6 57.5 GCN Bio BOW (34, 630 × 5, 000) 59.9 Adjacency (34, 630 × 34, 630) 61.0 Table 2: Performance in terms of accuracy percentage comparison of logistic regression (LR), support vector machines (SVM), and graph convolutional networks (GCN). The first two rows (marked with ∗) are existing approaches from Preot¸iuc-Pietro et al. (2015) and Aletras and Chamberlain (2018). The number in brackets are the dimension of the feature space. pG and fG refer to partial graph and full graph respectively. We use F-Bio to denote “Follower Bio BOW”. SVM. We achieve higher accuracy (55.3%) as compared to tweets BOW (54.6%). However, the model is less effective than using follow Bio BOW. Combining both node representations and follow Bio BOW features further boosts the accuracy to 57.5%. 4.2 GCN To analyze the importance of Bios in conjunction with social network information, we exploit graph convolutional networks. With an accuracy of 59.9%, the model exceedingly outperforms existing approaches on tweets and partial network information. Our best result 61.0% accuracy is achieved by using GCN with one-hot encoding for nodes, which is significantly higher than existing methods. This shows that GCN is able to exploit the rich topological information of network to learn social representations for users. We postulate that the GCN with Bio did not do better than just a one-hot encoding for nodes because the main users do not have Bios: so all the labeled nodes in the GCN have no Bios, which makes learning difficult. We visualize the GCN final layer representations of training set (big ovals) and test set (dark colored dots) in Figure 3a. It can be observed that many test data samples are mapped to the correct group of occupation, showing the capability of 2637 -15 -10 -5 0 5 10 15 -15 -10 -5 0 5 10 15 (a) 2D t-SNE Visualization 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 (b) Confusion Matrix Accuracy GCN (Network) Tweets (SVM) 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Fraction of training data (c) Accuracy vs Training data Figure 3: (a) A 2D t-SNE plot of final layer user represntations learned using GCN; (b) Confusion matrix of prediction made by GCN (rows and columns represent actual and predicted group, respectively); (c) Model performance vs fraction of training data used. GCN utilizing Twitter network information for the prediction task. To analyze wrongly mapped test samples, we observed confusion matrix as shown in Figure 3b. We see that group 4 is predicted as belonging to group 1 or 2. 
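Both analyses in Figure 3a and 3b can be reproduced with a few lines of scikit-learn, as sketched below. `final_layer`, `y_true`, and `y_pred` are placeholder names for the GCN's last-layer representations of the main IDs, their gold occupation groups, and the predicted groups; the t-SNE settings shown are ordinary defaults rather than the exact values used for the figure.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import confusion_matrix

# final_layer: (n_main_ids, hidden_dim) array of last-layer GCN outputs
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(final_layer)

# Row-normalised confusion matrix: rows = actual group, columns = predicted group.
cm = confusion_matrix(y_true, y_pred, labels=list(range(1, 10)))
cm_norm = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)
```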
When we compare the jobs lying in groups 1, 2, and 4, we found that they contain similar types of sub-occupations, such as “financial account managers” and “finance officers”, or “engineers” and “engineering technicians”. The same phenomenon can be seen for group 9 and group 5. Figure 3c compares the performance of two models, using tweet only features (LR–tweets) and follow network features (GCN–Bio), based on a fraction of training samples used for model learning. Even with 10% of the labeled training data, GCN with Bio-BOW features achieves comparable accuracy to existing models as well as models trained on tweet BOW with all the training set. This shows the significance of a user’s network information. We analyze the predictions on test samples made by GCN with Bio feature input and GCN with the one-hot encoded input. We find that 11% of the test set’s main IDs are correctly classified by only one of the two GCNs. This suggests that Bio features provide complementary information to the one-hot encoded input. In this work, the acquired network is dense. In cases when network is sparse, one-hot representation of an ID seems infeasible while BOW may generalize for the larger graph. While occupational class prediction could be used to improve service quality, we note that the use of network information might result in unintended consequences such as racial and ethnicity based segregation in online spaces. To alleviate such concerns, it would be useful in future to incorporate explainable predictions with work such as (Xie and Lu, 2019), to further mitigate such risks involved. 5 Conclusion and Future Work Previous works have used tweets or a fraction of the network information to extract features for occupation classification. To analyze the importance of network information, we extended an existing Twitter dataset for a user’s social media connections (follow information). We showed that by using only follow information as an input to graph convolutional networks, one can achieve a significantly higher accuracy on the prediction task as compared to the existing approaches utilizing tweet-only information or partial network structure. Directions of future research include adaptation of our methods to a large scale, sparsely connected social network. One might also want to investigate the inductive settings of GCN (Hamilton et al., 2017) to predict demographic information of a user from outside the black network. Acknowledgments We would like to thank the reviewers for their helpful comments on our work. This work is supported by DSO grant DSOCL17061. References Nikolaos Aletras and Benjamin Paul Chamberlain. 2018. Predicting twitter user socioeconomic at2638 tributes with network and language information. In Proc. of Hypertext and Social Media. John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proc. of EMNLP. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating twitter users. In Proc. of CIKM. Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recommendation based on people and tags. In Proc. of SIGIR. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034. Bo Han, Paul Cook, and Timothy Baldwin. 2013. A stacking-based approach to twitter user geolocation prediction. In Proc. of ACL (System Demonstrations). 
Yanxiang Huang, Lele Yu, Xiang Wang, and Bin Cui. 2015. A multi-source integration framework for user occupation inference in social media systems. World Wide Web, 18(5):1247–1267. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proc. of ICLR. Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1):415– 444. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Zachary Miller, Brian Dickinson, and Wei Hu. 2012. Gender prediction on twitter using stream algorithms with n-gram character features. International Journal of Intelligence Science, 2(04):143. Andrew Y Ng, Michael I Jordan, and Yair Weiss. 2002. On spectral clustering: Analysis and an algorithm. In Advances in neural information processing systems, pages 849–856. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proc. of KDD. Daniel Preot¸iuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An analysis of the user occupational class through twitter content. In Proc. of ACL-IJCNLP. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in twitter. In Proc. of the 2nd international workshop on Search and mining user-generated contents. Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. Departmental Papers (CIS), page 107. Luke Sloan, Jeffrey Morgan, Pete Burnap, and Matthew Williams. 2015. Who tweets? deriving the demographic characteristics of age, occupation and social class from twitter user meta-data. PloS one, 10(3):e0115545. Shangsheng Xie and Mingming Lu. 2019. Interpreting and understanding graph convolutional neural network using gradient-based attribution methods. CoRR, abs/1903.03768. Jaewon Yang and Jure Leskovec. 2013. Overlapping community detection at scale: a nonnegative matrix factorization approach. In Proc. of WSDM.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2639–2649 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2639 Domain Adaptive Dialog Generation via Meta Learning Kun Qian Univeristy of California, Davis [email protected] Zhou Yu Univeristy of California, Davis [email protected] Abstract Domain adaptation is an essential task in dialog system building because there are so many new dialog tasks created for different needs every day. Collecting and annotating training data for these new tasks is costly since it involves real user interactions. We propose a domain adaptive dialog generation method based on meta-learning (DAML). DAML is an end-to-end trainable dialog system model that learns from multiple rich-resource tasks and then adapts to new domains with minimal training samples. We train a dialog system model using multiple rich-resource singledomain dialog data by applying the modelagnostic meta-learning algorithm to dialog domain. The model is capable of learning a competitive dialog system on a new domain with only a few training examples in an efficient manner. The two-step gradient updates in DAML enable the model to learn general features across multiple tasks. We evaluate our method on a simulated dialog dataset and achieve state-of-the-art performance, which is generalizable to new tasks. 1 Introduction Modern personal assistants, such as Alexa and Siri, are composed of thousands of single-domain task-oriented dialog systems. Every dialog task is different, due to the specific domain knowledge. An end-to-end trainable dialog system requires thousands of dialogs for training. However, the availability of the training data is usually limited as real users have to be involved to obtain the training dialogs. Therefore, adapting existing rich-resource data to new domains with limited resource is an essential task in dialog system research. Transfer learning (Caruana, 1997a; Bengio, 2012; Cohn et al., 1994; Mo et al., 2018), few-shot learning (Salakhutdinov et al., 2012; Li et al., 2006; Norouzi et al., 2013; Socher et al., 2013) and meta-learning (Finn et al., 2017) are introduced in solving such data scarcity problem in machine learning. Because every dialog domain is very different from each other, generalize information from rich-resource domains to another low resource domain is difficult. Therefore, only a few studies have tackled domain adaptive end-to-end dialog training methods (Zhao and Esk´enazi, 2018). We propose DAML based on meta-learning to combine multiple dialog tasks in training, in order to learn general and transferable information that is applicable to new domains. Zhao and Esk´enazi (2018) introduces action matching, a learning framework that could realize zero-shot dialog generation (ZSDG), based on domain description, in the form of seed response. With limited knowledge of a new domain, the model trained on several rich-resource domains achieves both impressive task completion rate and natural generated response. Rather than action matching, we propose to use modelagnostic meta-learning (MAML) algorithm (Finn et al., 2017) to perform dialog domain adaptation. The MAML algorithm tries to build an internal representation of multiple tasks and maximize the sensitivity of the loss function when applied to new tasks, so that small update of parameters could lead to large improvement of new task loss value. 
This allows our dialog system to adapt to new domain successfully not only with little target domain data but also in a more efficient manner. The key idea of this paper is utilizing the abundant data in multiple resource domains and finding an initialization that could be accurately and quickly adapted to an unknown new domain with little data. We use the simulated data generated by SimDial (Zhao and Esk´enazi, 2018). Specifically, we use three domains: restaurant, weather, and bus information search, as source data and test the meta-learned parameter initialization against 2640 the target domain, movie information search. By modifying Sequicity (Lei et al., 2018), a seq2seq encoder-decoder network, improving it with a two-stage CopyNet (Gu et al., 2016), we implement the MAML algorithm to achieve an optimal initialization using dialog data from source domains. Then, we fine-tune the initialization towards the target domain with a minimal portion of dialog data using normal gradient descent. Finally, we evaluate the adapted model with testing data also from the target domain. We outperform the state-of-the-art zero-shot baseline, ZSDG (Zhao and Esk´enazi, 2018), as well as other transfer learning methods (Caruana, 1997b). We publish the code on the github1. 2 Related Works Task-oriented dialog systems are developed to assist users to complete specific tasks, such as booking restaurant or querying weather information. The traditional method to build a dialog system is to train modules separately (Chen et al., 2017) such as: natural language understanding (NLU) (Deng et al., 2012; Dauphin et al., 2014; Hashemi et al.), dialog state tracker (Henderson et al., 2014), dialog policy learning (Cuay´ahuitl et al., 2015; Young et al., 2013) and natural language generation (NLG) (Dhingra et al., 2017; Wen et al., 2015). Henderson et al. (2013) introduces the concept of belief tracker that tracks users’ requirements and constraints in the dialog across turns. Recently, more and more works combine all the modules into a seq2seq model for the reason of easier model update. Lei et al. (2018) has introduced a new end-to-end dialog system, sequicity, constructed on a two-stage CopyNet (Gu et al., 2016): one for the belief tracker and another one for the response generation. This model has fewer number of parameters and trains faster than the state-of-the-art baselines while outperforming baselines on two large-scale datasets. The traditional paradigm in machine learning research is to train a model for a specific task with plenty of annotated data. Obviously, it is not reasonable that large amount of data is still required to train a model from scratch if we already have models for similar tasks. Instead, we want to quickly adapt a trained model to a new task with a small amount of new data. Dialog adaptation 1https://github.com/qbetterk/sequicity.git has been explored in various dimensions. Shi and Yu (2018) introduces an end-to-end dialog system that adapts to user sentiment. Mo et al. (2018) and Genevay and Laroche (2016) also trains a user adaptive dialog systems using transfer learning. Recently, effective domain adaptation has been introduced for natural language generation in dialog systems (Tran and Nguyen, 2018; Wen et al., 2016). Some domain adaptation work has been done on dialog states tracking (Mrkˇsi´c et al., 2015) and dialog policy learning (Vlasov et al., 2018) as well. 
However, there is no recent work about domain adaptation for a seq2seq dialog system, except ZSDG Zhao and Esk´enazi (2018). ZSDG is a zero-shot learning method that adapts action matching to adapt models learned from multiple source domains to a new target domain only using its domain description. Different from ZSDG, we propose to adapt meta-learning to achieve similar domain adaption ability. Meta-learning aims at learning new tasks with few steps and little data based on well-known tasks. One way to realize meta-learning is to learn an optimal initialization that could be adapted to new task accurately and quickly with little data (Vinyals et al., 2016; Snell et al., 2017). Another way to learn the learning progress is to train a meta-learner to optimize the optimizer of original network for updating parameters (Andrychowicz et al., 2016; Grant et al., 2018). Meta-learning has been applied in various circumstances such as image classification (Santoro et al., 2016; Finn et al., 2017), machine translation (Gu et al., 2018), robot manipulation (Duan et al., 2016; Wang et al., 2016), etc. We propose to apply meta-learning algorithm on top of the sequicity model to achieve dialog domain adaptation. Specifically, we chose the recently introduced algorithm, model-agnostic meta-learning(MAML) (Finn et al., 2017), because it generalizes across different models. This algorithm is compatible with any model optimized with gradient descent, such as regression, classification and even policy gradient reinforcement learning. Moreover, this algorithm outperforms other state-of-the-art one-shot algorithms for image classification. 3 Problem Formulation Seq2Seq-based dialog models take the dialog context c as the input and generates a sentence r as the response. Given the abundant data in the K differ2641 Figure 1: (a) shows the classical gradient update steps. (b) shows how we use MAML to update model with gradient descent. The index numbers suggest the processing order of each step. ent source domains, we have the training data in each source domain Sk, denoted as: DSk train = {(c(k) n , r(k) n , Sk), n = 1...N}, k = 1...K we also denote the data in the target domain T as: DT train = {(cT n, rT n , T), n = 1...N′} where N′ << N and N′ is only 1% of N in our setting. During the training process, we generate a model Msource : C × Sk →R where C is the set of context and R is the set of system responses. For the adaptation, we fine-tune the model Msource with target domain training data DT train and obtain a new model Mtarget. Our primary goal is to learn a model that could perform well in the new target domain: Mtarget : Ctarget × T →Rtarget 4 Proposed Methods We first introduce how to combine the MAML algorithm and the sequicity model. As illustrated in the Figure 1, the typical gradient descent includes (1) combining training data and initialized model, (2) computing the objective loss and then (3) using the loss to update the model parameters. However, with MAML, there are two gradient update steps. (1) We first combine the initialized model M with training data (c(k), r(k)) from each source domain Sk separately. (2) For each dialog domain, we calculate the loss Lossk and them use it to update every new temporary domain model M′ k. (4) Again we use the data (c(k), r(k)) from each domain and its corresponding temporarily updated domain model M′ k to calculate a new loss Loss′ k in each domain, (6) then sum all the new domain loss to obtain the final loss. 
(7) Finally, we use the final loss to update the original model M. In the following part, we describe the implementation details of the MAML algorithm and the sequicity model separately. As illustrated in Algorithm 1, sequicity model is used to combine natural language understanding (NLU), dialog managing and response generation in a seq2seq fashion, while meta-learning is a method to adjust loss function value for better optimization. α and β in the algorithm are the learning rate. As mentioned in Section 3, c denotes the context and is the input to the model at each turn. In order to use the sequicity model, we format c as {Bt−1, Rt−1, Ut} at time t, where Bt−1 is the previous belief span at time t −1, Rt−1 is the last system response and Ut is the current user utterance. Sequicity model introduces belief spans to store values of all the informable slots and also record requestable slot names through the history. In this way, rather than put all the history utterances into a RNN to extract context features, we directly deal with the slots stored in the belief span as the representation of all history contexts. The belief span is more accurate and simple to represent the history context and needed to be updated in every turn. The informable and requestable slots are stored in the same span, but with different labels to avoid ambiguity. The context at time t = 1 contains an empty set as the former belief span B0, and an empty string as the previous system response R0 The intuition behind the MAML algorithm is that some internal representations are more trans2642 Algorithm 1 DAML Input: dataset on source domain DS train; α; β Output: optimal meta-learned model Randomly initialize model M while not done do for Sk ∈Source Domain do Sample data c(k) from DS train M′ k = M −α∇MLSk(M, c(k)) Evaluate LSk(M′ k, c(k)) end for M ←M −β∇M P Sk LSk(M′ k, c(k)) end while Function loss function L(M, c) return cross-entropy(M(c)) Function M(c(k) = {B(k) t−1, R(k) t−1, U(k) t }) h = Encoder(B(k) t−1, R(k) t−1, U(k) t ) Bt = BspanDecoder(h) Rt = ResponseDecoder(h, B(k) t , m(k) t ) return Rt ferable than others. This suggests that some internal features can be applied to multiple dialog domains rather than a single domain. Since MAML is compatible with any gradient descent based model, we denote the current generative dialog model as M, which can be randomly initialized. According to the algorithm, for each source domain Sk, certain size of training data is sampled. We input the training data (c(k), r(k)) into sequicity model and obtain generated system response. We adopt cross-entropy as the loss function for all the domains: LSk(M, c(k), r(k)) = |r(k)| X j=1 r(k) j · log PM(r(k) j ) For each source domain Sk, We use gradient descent to update and get a temporary model. M′ k ←M −α∇MLSk(M, c(k), r(k)) To be consistent with (Finn et al., 2017), we only update the model for one step. In this way, we have an updated model in each source domain, one step away from M. We may consider multiple steps of gradient update in the future work. Then, we compute the loss based on the updated model with the same training data in each source domain: Loss = LSk(M′ k, c(k), r(k)) After this step, we have meta loss value in each domain. 
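Algorithm 1 can be condensed into the PyTorch-style sketch below, which makes the two gradient steps explicit before the meta objective is written out formally in the next paragraph. The sketch is a first-order approximation: the gradient of the meta loss on M′_k is applied directly to M rather than back-propagated through the inner update as in full MAML, plain SGD stands in for the Adam optimizer used in practice, and `loss_fn`, the batch format, and the learning rates are placeholders rather than our exact training code.

```python
import copy
import torch

def daml_step(model, domain_batches, loss_fn, alpha=1e-3, beta=1e-3):
    """One DAML meta-update (first-order sketch of Algorithm 1).

    domain_batches: {source_domain: batch}, one sampled batch per source domain.
    loss_fn(model, batch): cross-entropy of the generated response on the batch.
    Assumes every model parameter contributes to the loss.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]

    for batch in domain_batches.values():
        # Inner step: temporary model M'_k = M - alpha * grad L_Sk(M)
        temp = copy.deepcopy(model)
        inner_loss = loss_fn(temp, batch)
        grads = torch.autograd.grad(inner_loss, list(temp.parameters()))
        with torch.no_grad():
            for p, g in zip(temp.parameters(), grads):
                p -= alpha * g

        # Meta loss L_Sk(M'_k) on the same data; first-order: its gradient
        # with respect to M'_k is used directly as the gradient for M.
        meta_loss = loss_fn(temp, batch)
        grads = torch.autograd.grad(meta_loss, list(temp.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g

    # Outer step: M <- M - beta * sum_k grad L_Sk(M'_k)
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p -= beta * g
```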
We sum up the updated loss value from all source domains as the objective function of metalearning: min M Meta-Loss = min M X Sk LSk(M′ k, c(k), r(k)) Finally, we update the model to minimize the meta objective function: M ←M −β∇M X Sk LSk(M′ k, c(k), r(k)) Unlike common gradient, in MAML, the objective loss we use to update model is not calculated directly from the current model M′ k, but from the temporary model M′ k. The idea behind this operation is that the loss calculated from the updated model is obviously more sensitive to the changes in original domains, so that we learn more about the common internal representations of all source domains rather than the distinctive features of each domain. Then in the adaptation step, since the basic internal representation has already been captured, the model is sensitive to the unique features of the new domain. As a result, one or a few gradient steps and minimum amount of data are required to optimize the model to the new domain. The sequicity model is constructed based on a single seq2seq model incorporating copying mechanism and belief span to record dialog states. Given a context c in the form of {Bt−1, Rt−1, Ut}, the belief span Bt at time t is extracted based on the previous belief span Bt−1 at time t−1, the history response Rt−1 at time t −1 and the utterance Ut at time t: Bt = seq2seq(Bt−1, Rt−1, Ut) Then, we generate system response based on both context and belief span extracted before: Rt = seq2seq(Bt−1, Rt−1, Ut|Bt, mt) mt is a simple label that helps generate the response. It checks whether or not requested information is available in the database with constraints stored in Bt. mt has three possible values: no match, exact match and multiple match. mt = “no match” denotes that the system cannot find a match in the database given the constraints, then the system would initiate restart the conversation. mt = “exact match” indicates the system successfully retrieves the requested information and completes the task, then the system would 2643 Figure 2: Structure of dialog system end the conversation. mt = “multiple matches” means there are multiple items matches all the constraints, so more constraints are needed to reduce the range of search in the backend database. So the system will then output a question to elicit more information. The structure is illustrated in Figure 2 and it is compatible with any seq2seq model. To have a simple architecture, we adopt the basic encoderdecoder structure. Both encoder and decoder employ GRU with attention mechanism. The response is generated using belief span and utterance at the current time. To simplify the model, we let the belief extractor and response generator share the same encoder. So we reformulate the equations into: h = Encoder(Bt−1, Rt−1, Ut) Bt = BspanDecoder(h) Rt = ResponseDecoder(h, Bt, mt) We also need to apply the third attention-based GRU for the response decoding. Because the response and the utterance usually share some word tokens, the sequicity model also incorporates copy-attention mechanism. Originally, to decode an encoded vector, the model uses softmax to obtain a probability over vocabulary P vocab(v) where v ∈V . With copy-attention, the decoder not only considers the word generation probability distribution over vocabulary, but also the likelihood of copy the word from input sequence P copy(v) where v ∈V ∪Ut and Ut is the current user utterance in the input context c. 
Then the total probability of word v at ith token in the output sequence is calculated by summing these two probabilities (normalization is performed after the summation): Pi(v) = (1−g)·P vocab i (v)+g·P copy i (v), v ∈V ∪Ut The copy probability is calculated similarly in Gu et al. (2016) and is different for belief span decoder and response decoder. For the belief span decoder, the copy probability is calculated as: P copy i (v) = 1 Z |Ut| X j:uj=v eψ(uj) where Z is a normalization factor and uj is the jth word tokens in the utterance Ut. We only add the component when uj is the same as the target word v. ψ(uj) is computed by: ψ(uj) = σ((henc j )T W)hdec j where henc j is the hidden state in the encoder for the jth word as input, hdec j is the hidden state in the belief span decoder and W ∈Rd×d is the copyattention weight. For the response decoder, we apply the copy attention on the recently generated belief span Bt rather than utterance Ut: P copy i (v) = 1 Z′ |Bt| X j:bj=v eψ(bj) ψ(bj) = σ((hdec j )T W)hdec j where both hidden states come from belief span decoder. 5 Experiment We first introduce the dataset and the metrics used to evaluate our models. Then, we describe models evaluated in the experiments and their implementation details. 5.1 Dataset For a fair comparison with the state-of-the-art domain adaptation algorithm, ZSDG (Zhao and Esk´enazi, 2018), we use the dataset, SimDial, which first introduced to evaluate ZSDG. Please refer to Appendix A for an example dialog. There are in total six dialog domains in SimDial: restaurant, weather, bus, movie, restaurant-slot and restaurant-style, where restaurant-slot data has the 2644 same slot type and sentence generation templates as the restaurant task but a different slot vocabulary. Similarly, restaurant-style has the same slots but different natural language generation (NLG) templates compared to the restaurant domain. We choose restaurant, weather and bus as source domains, denoted as following the experiment setting of ZSDG in (Zhao and Esk´enazi, 2018). For each source domain, we have 900, 100, 500 conversations for training, validation and testing correspondingly, each of which has 9 turns and each utterance has 13 word tokens on average. The rest three domains are for evaluation, which are considered as target domains. The seed response used in ZSDG is a set of system utterances and corresponding labels. To achieve a fair comparison, we use dialog data of the same size for adaptation training. We generate 9 dialogs (1% of source domain) for each domain’s adaptation training, each averagely contains about 8.4 turns. So for each target domain, we assume we have around 76 system response, which is smaller than the 100 seed response, ZSDG used as domain description. For testing, we use 500 dialogs for each target model. Movie is chosen to be the new target domain for evaluation. Because movie has completely different NLG templates and dialog structure, sharing very few common traits with the source domains at the surface level. To avoid any random results in this few-shot learning setting, we report the average of ten random runs for all results. For further exploring the property of the proposed method, we have also generated one dialog for the one-shot experiment, 45 dialogs (5% of the size in source domain), 90 dialogs (10% of the size in source domain) study the adaptation efficiency of our methods. 5.2 Metrics There are three main metrics in our experiments: BLEU score, entity F1 score and adapting time. 
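Both automatic scores can be computed with a few lines of Python; one possible implementation is sketched below, where belief spans are treated as bags of slot-value strings and corpus BLEU is delegated to sacrebleu. The micro-averaged matching scheme and the use of sacrebleu in place of the original BLEU script are our own simplifying assumptions, not a specification of the exact evaluation code.

```python
from collections import Counter
import sacrebleu   # assumed stand-in for the original BLEU script

def entity_f1(pred_spans, gold_spans):
    """Micro-averaged F1 over slot values in generated vs. oracle belief spans.

    pred_spans / gold_spans: list of dialogs, each a list of slot-value strings.
    """
    tp = fp = fn = 0
    for pred, gold in zip(pred_spans, gold_spans):
        matched = sum((Counter(pred) & Counter(gold)).values())
        tp += matched
        fp += len(pred) - matched
        fn += len(gold) - matched
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

def corpus_bleu(pred_responses, gold_responses):
    # pred_responses: list of generated system responses; gold_responses: references.
    return sacrebleu.corpus_bleu(pred_responses, [gold_responses]).score
```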
The first two are the most important and persuasive metrics used in Finn et al. (2017) has exhaustively demonstrated the MAML’s fast adaptation speed to new tasks. It could even achieve amazing performance with one step of gradient update incorporating with halfcheetah and ant. We would also like to count the number of epochs for adaptation to compare the adaptation speed between our methods and the baseline of transfer learning. • BLEU We use BLEU score (Papineni et al., 2002) to evaluate the quality of generated response sentences since generating natural language is also part of the task. • Entity F1 Score For each dialog, we compare the generated belief span and the Oracle one. Since belief span contains all the slots that constraints the response, this score also checks the completeness of tasks. • Adapting Time We count the number of epochs during the adaptation training. We only compare the adaptation with the data of the same size. 5.3 Baseline Models To evaluate the effectiveness of our model, we compare DAML with the following two baselines: • ZSDG (Zhao and Esk´enazi, 2018) is the state-of-the-art dialog domain adaptation model. This model strengthens the LSTMbased encoder-decoder with an action matching mechanism. The model samples 100 labeled utterances as domain description seeds for domain adaptation. • Transfer learning is applied on the sequicity model as the second baseline. We train the basic model by simply mixing all the data from source domains and then following Figure 1 (a) to update the model. We also enlarge the vocabulary with the training data in target domain. Besides, we implement one-shot learning version of this model by only using one target domain dialog for adaptation, as a comparison with the one-shot learning case of DAML. 5.4 Implementation details For all experiments, we use the pre-trained GloVe word embedding (Pennington et al., 2014) with a dimension of 50. We choose the one-layer GRU networks with a hidden size of 50 to construct the encoder and decoder. The model is optimized using Adam (Kingma and Ba, 2014) with a learning rate of 0.003. We reduce the learning rate to half if the validation loss increases. We set the batch (Ioffe and Szegedy, 2015) size to 32 and the dropout (Zaremba et al., 2014) rate to 0.5. 6 Results and Analysis Table 1 describes all the model performance. We denote testing data from the combination of 2645 In Domain ZSDG Transfer DAML Transfer-oneshot DAML-oneshot BLEU 70.1 51.8 51.8 51.1 53.7 Entity F1 79.9 88.5 91.4 87.6 91.2 Epoch 2.7 1.4 2.2 1.0 Unseen Slot ZSDG Transfer DAML Transfer-oneshot DAML-oneshot BLEU 68.5 43.3 (46.3) 41.7 (46.3) 40.8 (43.9) 40.0 (41.8) Entity F1 74.6 78.7 (78.5) 75 (79.2) 70.1 (67.7) 72.0 (73.0) Epoch 2.6 (2.4) 4.8 (3.4) 3.2 (2.6) 5.0 (3.0) Unseen NLG ZSDG Transfer DAML Transfer-oneshot DAML-oneshot BLEU 70.1 30.6 (32.4) 21.5 (26.0) 20.0 (21.5) 19.1 (19.1) Entity F1 72.9 82.2 (85.0) 77.5 (82.4) 82.8 (86.2) 69.0 (86.4) Epoch 3.2 (3.0) 3.2 (2.1) 12.3 (20.3) 4.7 (5.7) New Domain ZSDG Transfer DAML Transfer-oneshot DAML-oneshot BLEU 54.6 30.1 32.7 21.5 22.4 Entity F1 52.6 64.0 66.2 55.9 59.5 Epoch 5.6 4.5 14.2 5.8 Table 1: DAML outperforms both ZSDG and transfer learning when given similar target domain data. Even the one-shot DAML method achieves better results than ZSDG. Values in parenthesis are the results of the model with an extra step of fine-tuning on the restaurant domain in training. 
“In Domain” uses all three source domains (restaurant, weather and bus), while “New Domain” refers to the movie domain. “Unseen Slot” and “Unseen NLG” correspond to restaurant-slot and restaurant-style separately. restaurant, weather and bus domains as “In Domain” data since they are in the same domains as what we use to train. The data from movie domain is denoted as “New Domain” as it is unseen in training data. “Unseen Slot” and “Unseen NLG” represent restaurant-slot and restaurant-style domains correspondingly. To keep a fair comparison, both Transfer and DAML use 1% of source domain data (9 dialogs, in total 76 system responses), which is equal to the seed response that Zhao and Esk´enazi (2018) uses. We found that both transfer learning and DAML obtain better results than ZSDG. Especially for the “New Domain”, DAML achieves the entity F1 score of 66.2, 25.8% relative improvement compared with ZSDG. As for “In Domain” testing, DAML also obtains 14.4% improvement beyond ZSDG. However, our method does not get large improvement in the “Unseen slot” and “Unseen NLG” domains. We notice that these two domains are actually generated from one of the source domain (restaurant domain). So, even though the slots or templates are changed, they should still share some features with the original domain data. If we could take advantage of the original restaurant domain, the result should be improved. Following this intuition, in the “Unseen slot” domain and the “Unseen NLG” domain, we first fine-tune the model obtained from DAML with the original restaurant data in training, and then we do further fine-tune with the adaptation data. The results are further improved and presented in the parenthesis in Table 1. We see that in most cases, fine-tuning on restaurant data increases both the BLEU score and entity F1 score on the “Unseen Slot” and “Unseen NLG” domain. Finn et al. (2017) emphasizes that meta-learning obtains decent results with extremely small size of data, even in the one-shot cases. To verify this claim, we perform a one-shot version of the DAML training along with one-shot transfer learning by only using one target domain dialog. The result shows that even the one-shot case of DAML outperforms the ZSDG baseline in all cases except “Unseen slot” in entity F1. For the “Unseen NLG” domain, the DAML one-shot case even obtains the highest score. Considering DAML one-shot also having out-standing performance when adapted to “In Domain,” this suggests that the “Unseen NLG” domain is relatively close to the “In Domain.” And nearly every model achieves a similarly high score by fine-tuning the model which is already adapted to the “In Domain” data. Since the score of “In Domain” is already extremely high, we assume the model have learned the common features well. We also mention in the Sec 4 that MAML is sensitive to the new knowledge. Given that the model already learns the common features well ,in the one-shot setting, the model focuses on learning the unique features of the target domain, while the setting with 1% adaptation data still partially focus on some common features. And our method shows evident advantage not only with better scores but also with much fewer update steps. We observe in Table 1, DAML only needs one epoch to find the optimum when adapt2646 ing to the “In Domain.” Even for the “New Domain,” DAML only uses 5.8 epochs on average to converge, which is only 40% of epochs used in transfer learning. 
The epoch numbers in the Table 1 are not integers because all the results in our experiment are the average value of results from ten random runs, explained in Sec 5.1. Therefore, we conclude DAML is more efficient compared with simple transfer learning. DAML’s success mainly comes from three possible reasons. The first is the CopyNet mechanism. The copy model directly copy and output word tokens from the context, contributing to the high entity F1 score. The belief span also helps to improve the performance. With the belief span, we no longer need to extract slots from all the history utterances in each turn. Instead, we only need the previous slots, stored in belief span, that the copy model could directly deal with. This allows us to simplify our framework and improve the performance. Finally, the meta-learning allows our model to learn inner features of the dialog across different domains. movie Transfer DAML Entity F1 64.0 66.2 BLEU 30.1 32.7 restaurant Transfer DAML Entity F1 80.7 82.1 BLEU 46.1 47.9 bus Transfer DAML Entity F1 60.0 61.9 BLEU 32.0 35.9 weather Transfer DAML Entity F1 79.1 80.4 BLEU 38.9 43.3 Table 2: Performance on different dialog domains We also change different tasks used in source and target data to validate the robustness of our model. We use the leave-one-out approach to compare the difference between movie, restaurant, bus and weather domains. When we choose one of them as the target domain, we use the other three as the source domains. The size of the dataset (1% target data for adaptation) and model hyperparameters are keeping the same as the main experiment described above. We observe in the table 2, the restaurant domain achieves both the highest entity F1 score and the highest BLEU score, which means it is the easiest domain to adapt to. The bus domain receives the lowest entity F1 score and the movie domain holds the second lowest one, as well as the lowest BLEU score. This demonstrates that the movie domain is really a hard domain for adaptation and is worth being chosen as the target domain. Among all combinations, DAML outperforms the transfer learning algorithms in both Entity F1 and BLEU. Figure 3: The system performance improves when the size of the target data increases. Even the one-shot learning setting achieves decent performance. In addition, we investigate the impact of using different amount of target domain data on system performance. We use the best model trained on restaurant, bus and weather and test on the movie domain. The size of target data varies from one dialog in one-shot learning to 10% of the data, which is 90 dialogs. Figure 3 shows the system performance positively correlates with the amount of training data available in the target domain. We observe that both entity F1 and BLEU scores nearly converge when 4% of the data is used. Although 4% is three times the size of the seed response used in Zhao and Esk´enazi (2018), we notice that even the one-shot case of our model outperforms ZSDG in the new domain. This demonstrates our method’s capability to achieve good performance with only little target data. Although the DAML has demonstrated outstanding performance in dialog domain adaptation, it still cannot perfectly adapt to a new domain, especially when there is out of domain words in new domain, denoted as unk. 
If unk lies in the utterance, such as “system: Movie from what country?” “user: Movie from unk.” System can hardly extract the needed slot since it does not recognize the surface form of the slot, even if we recognize the unk as the entity. If unk appears in the belief span, when our system uses copy model to generate the new belief span based on the previous one, it is hard to handle the unk token. The model also has difficulties in handling complex utterances, especially when a sentence has corrections, such as: “new request. in 2000-2010. 2647 oh no, in 70s.” In this case, our system successfully adds only 70s to the belief span, mainly because the adverb in suggests 70s is a year. However, the system keeps the original slot year, leading to a no match result. Moreover, in the case “that’s wrong. i love western ones.”, our system is confused on what the pronoun “ones” refers to. So it does not recognize “western” is a dialog slot. 7 Conclusion and Future Work We propose a domain adaptive dialog generation method based on meta-learning(DAML). We also construct an end-to-end trainable dialog system that utilizes a two-step gradient update to obtain models that are more sensitive to new domains. We evaluate our model on a simulated dataset with multiple independent domains. DAML reaches the state-of-the-art performance in Entity F1 compared with a zero-shot learning method and a transfer learning method. DAML is an effective and robust method for training dialog systems with low-resources. The DAML also provides promising potential extension, such as applying DAML on reinforcement learning-based dialog system. We also plan to adapt DAML to multi-domain dialog tasks. References Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. 2016. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989. Yoshua Bengio. 2012. Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 17–36. Rich Caruana. 1997a. Multitask learning. Mach. Learn., 28(1):41–75. Rich Caruana. 1997b. Multitask learning. Machine learning, 28(1):41–75. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter, 19(2):25–35. David Cohn, Les Atlas, and Richard Ladner. 1994. Improving generalization with active learning. Machine learning, 15(2):201–221. Heriberto Cuay´ahuitl, Simon Keizer, and Oliver Lemon. 2015. Strategic dialogue management via deep reinforcement learning. CoRR, abs/1511.08099. Yann Dauphin, G¨okhan T¨ur, Dilek Z. Hakkani-T¨ur, and Larry P. Heck. 2014. Zero-shot learning and clustering for semantic utterance classification. CoRR, abs/1401.0509. Li Deng, G¨okhan T¨ur, Xiaodong He, and Dilek Z. Hakkani-T¨ur. 2012. Use of kernel deep convex networks and end-to-end learning for spoken language understanding. 2012 IEEE Spoken Language Technology Workshop (SLT), pages 210–215. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. End-to-end reinforcement learning of dialogue agents for information access. In ACL. Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. 2016. Rl$ˆ2$: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779. 
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400. Aude Genevay and Romain Laroche. 2016. Transfer learning for user adaptation in spoken dialogue systems. In AAMAS. Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. 2018. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393. Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor O. K. Li. 2018. Meta-learning for low-resource neural machine translation. CoRR, abs/1808.08437. Homa B Hashemi, Amir Asiaee, and Reiner Kraft. Query intent detection using convolutional neural networks. Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The third dialog state tracking challenge. 2014 IEEE Spoken Language Technology Workshop (SLT), pages 324–329. Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 467–471. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. 2648 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447. Association for Computational Linguistics. Fei-Fei Li, Rob Fergus, and Pietro Perona. 2006. Oneshot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611. Kaixiang Mo, Yu Zhang, Shuangyin Li, Jiajun Li, and Qiang Yang. 2018. Personalizing a dialogue system with transfer reinforcement learning. In ThirtySecond AAAI Conference on Artificial Intelligence. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. arXiv preprint arXiv:1506.07190. Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2013. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Ruslan Salakhutdinov, Joshua Tenenbaum, and Antonio Torralba. 2012. One-shot learning with a hierarchical nonparametric bayesian model. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 195–206. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. 2016. Meta-learning with memory-augmented neural networks. In ICML. Weiyan Shi and Zhou Yu. 2018. Sentiment adaptive end-to-end dialog systems. CoRR, abs/1804.10731. Jake Snell, Kevin Swersky, and Richard Zemel. 
2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pages 935–943. Van-Khanh Tran and Le-Minh Nguyen. 2018. Adversarial domain adaptation for variational neural language generation in dialogue systems. In COLING. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. Vladimir Vlasov, Akela Drissner-Schmid, and Alan Nichol. 2018. Few-shot generalization across dialogue tasks. CoRR, abs/1811.11707. Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, R´emi Munos, Charles Blundell, Dharshan Kumaran, and Matthew Botvinick. 2016. Learning to reinforcement learn. CoRR, abs/1611.05763. Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei hao Su, David Vandyke, and Steve J. Young. 2015. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In SIGDIAL Conference. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In HLT-NAACL. Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101:1160–1179. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329. Tiancheng Zhao and Maxine Esk´enazi. 2018. Zeroshot dialog generation with cross-domain latent actions. CoRR, abs/1805.04803. A SimDial Dataset Example Table 3 is an example dialog in the restaurant domain generated by SimDial. Both the domain and sentence complexity can be adjusted. For each turn, bot the informable and the requestable slots would be recorded and stored in the belief span. 2649 turn speaker utterances inform slots request slots 1 user What’s up? hmm I am looking for a restaurant. sys Which place? 2 user I uhm yeah I don’t care. Oh sorry, Philadelphia. loc,Philadelphia sys I believe you said Philadelphia. 3 user I have more requests. What kind of parking does it have? loc,Philadelphia; parking sys The restaurant has no parking. Anything else? food,Indian 4 user I have more requests. Is hmm ... it closed? loc,Philadelphia; opening sys No, It is open right now. What else can I do? food,Indian 5 user New request. I’m interested in food uhm at Seattle. loc,Seattle; sys Do you mean Indian? food,Indian 6 user Uh-huh. loc,Seattle; sys Restaurant 56 is a good choice. What else can I do? food,Indian 7 user Not done yet. What’s the average price? loc,Seattle; price sys The restaurant serves moderate food. food,Indian 8 user I have all I need. See you. loc,Seattle; sys See you next time. food,Indian Table 3: An example dialog generated from SimDial
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2650–2660 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2650 Strategies for Structuring Story Generation Angela Fan FAIR, Paris LORIA, Nancy [email protected] Mike Lewis FAIR, Seattle [email protected] Yann Dauphin Google AI∗ [email protected] Abstract Writers often rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities. The model first generates the predicate-argument structure of the text, where different mentions of the same entity are marked with placeholder tokens. It then generates a surface realization of the predicate-argument structure, and finally replaces the entity placeholders with context-sensitive names and references. Human judges prefer the stories from our models to a wide range of previous approaches to hierarchical text generation. Extensive analysis shows that our methods can help improve the diversity and coherence of events and entities in generated stories. 1 Introduction Stories exhibit structure at multiple levels. While existing language models can generate stories with good local coherence, they struggle to coalesce individual phrases into coherent plots or even maintain character consistency throughout a story. One reason for this failure is that classical language models generate the whole story at the word level, which makes it difficult to capture the high-level interactions between the plot points. To address this, we investigate novel decompositions of the story generation process that break down the problem into a series of easier coarse-tofine generation problems. These decompositions can offer three advantages: • They allow more abstract representations to be generated first, where challenging longrange dependencies may be more apparent. *Work done while at Facebook AI Research Figure 1: Proposed Model. Conditioned upon the prompt, we generate sequences of predicates and arguments. Then, a story is generated with placeholder entities such as ent0. Finally we replace the placeholders with specific references. • They allow specialized modelling techniques for the different stages, which exploit the structure of the specific sub-problem. • They are applicable to any textual dataset and require no manual labelling. Several hierarchical models for story generation have recently been proposed (Xu et al., 2018; Yao et al., 2019), but it is not well understood which properties characterize a good decomposition. We therefore implement and evaluate several representative approaches based on keyword extraction, sentence compression, and summarization. We build on this understanding to devise the proposed decomposition (Figure 1). Inspired by the classic model of Reiter and Dale (2000), our approach breaks down the generation process in three steps: modelling the action sequence, the story narrative, and lastly entities such as story characters. To model action sequences, we first generate the predicate-argument structure of the 2651 story by generating a sequence of verbs and arguments. This representation is more structured than free text, making it easier for the model learn dependencies across events. 
To model entities, we initially generate a version of the story where different mentions of the same entity are replaced with placeholder tokens. Finally, we rewrite these tokens into different references for the entity, based on both its previous mentions and the global story context. The models are trained on 300k stories from WRITINGPROMPTS (Fan et al., 2018), and we evaluate quality both in terms of human judgments and using automatic metrics. We find that our approach substantially improves story generation. Specifically, we show that generating the action sequence first makes the model less prone to generating generic events, leading to a much greater diversity of verbs. We also find that by using subword modelling for the entities, our model can produce novel names for locations and characters that are appropriate given the story context. 2 Model Overview The crucial challenge of long story generation lies in maintaining coherence across a large number of generated sentences—in terms of both the logical flow of the story and the characters and entities. While there has been much recent progress in left-to-right text generation, particularly using self-attentive architectures (Dai et al., 2018; Liu et al., 2018), we find that models still struggle to maintain coherence to produce interesting stories on par with human writing. We therefore introduce strategies to decompose neural story generation into coarse-to-fine steps to make modelling high-level dependencies easier to learn. 2.1 Tractable Decompositions In general, we can decompose the generation process by converting a story x into a more abstract representation z. The negative log likelihood of the decomposed problem is given by L = −log X z p(x|z)p(z). (1) We can generate from this model by first sampling from p(z) and then sampling from p(x|z). However, the marginalization over z is in general intractable, except in special cases where every x can only be generated by a single z (for example, if the transformation removed all occurrences of certain tokens). Instead, we minimize a variational upper bound of the loss by constructing a deterministic posterior q(z|x) = 1z=z∗, where z∗ can be given by running semantic role labeller or coreference resolution system on x. Put together, we optimize the following loss: z∗= arg max z p(z|x) (2) L ≤−log p(x|z∗) −log p(z∗) (3) This approach allows models p(z∗) and p(x|z∗) to be trained tractably and separately. 2.2 Model Architectures We build upon the convolutional sequence-tosequence architecture (Gehring et al., 2017). Deep convolutional networks are used as both the encoder and decoder. The networks are connected with an attention module (Bahdanau et al., 2015) that performs a weighted sum of the encoder output. The decoder uses a gated multi-head selfattention mechanism (Vaswani et al., 2017; Fan et al., 2018) to allow the model to refer to previously generated words and improve the ability to model long-range context. 3 Modelling Action Sequences To decompose a story into a structured form that emphasizes logical sequences of actions, we use Semantic Role Labeling (SRL). SRL identifies predicates and arguments in sentences, and assigns each argument a semantic role. This representation abstracts over different ways of expressing the same semantic content. For example, John ate the cake and the cake that John ate would receive identical semantic representations. 
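To make this intermediate representation concrete, the following minimal sketch linearizes predicate-argument frames into a flat action plan. It assumes the frames have already been produced by a pretrained SRL model; the frame format and the special tokens used here are illustrative assumptions, not the paper's exact vocabulary.

```python
# Minimal sketch: linearize SRL frames into a flat "action plan" sequence.
# Frames are assumed to come from a pretrained SRL model; the dict format and
# the special tokens (<V>, <ARG0>, </s>) are assumptions for illustration.

def linearize(sentence_frames, sep="</s>"):
    parts = []
    for frames in sentence_frames:            # one list of frames per sentence
        for frame in frames:                  # one frame per predicate
            parts.append("<V> " + frame["verb"])            # predicate verb first
            for role in ("ARG0", "ARG1", "ARG2"):           # core arguments only
                if role in frame:
                    parts.append("<{}> {}".format(role, frame[role]))
        parts.append(sep)                     # sentence delimiter token
    return " ".join(parts)

frames = [[{"verb": "ate", "ARG0": "John", "ARG1": "the cake"}]]
print(linearize(frames))   # <V> ate <ARG0> John <ARG1> the cake </s>
```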
Conditioned upon the prompt, we generate an SRL decomposition of the story by concatenating the predicates and arguments identified by a pretrained model (He et al., 2017; Tan et al., 2018)1 and separating sentences with delimiter tokens. We place the predicate verb first, followed by its arguments in canonical order. To focus on the main narrative, we retain only core arguments. Verb Attention Mechanism SRL parses are more structured than free text, enabling more structured models. To encourage the model to 1for predicate identification, we use https: //github.com/luheng/deep_srl, for SRL given predicates, we use https://github.com/XMUNLP/ Tagger 2652 Figure 2: Verb-Attention. To improve the model’s ability to condition upon past verbs, one head of the decoder’s self-attention mechanism is specialized to only attend to previously generated verbs. consider sequences of verbs, we designate one of the heads of the decoder’s multihead self-attention to be a verb-attention head (see Figure 2). By masking the self-attention appropriately, this verbattention head can only attend to previously generated verbs. When the text does not yet have a verb, the model attends to a zero vector. We show that focusing on verbs with a specific attention head generates a more diverse array of verbs and reduces repetition in generation. 4 Modelling Entities The challenge of modelling characters throughout a story is twofold: first, entities such as character names are rare tokens, which make them hard to model for neural language models. Human stories often feature imaginative, novel character or location names. Second, maintaining the consistency of a specific set of characters is difficult, as the same entity may be referenced by many different strings throughout a story—for example Bilbo Baggins, he, and the hobbit may refer to the same entity. It is challenging for existing language models to track which words refer to which entity purely using a language modelling objective. We address both problems by first generating a form of the story with different mentions of the same entity replaced by a placeholder token (e.g. ent0), similar to Hermann et al. (2015). We then use a sub-word seq2seq model trained to replace each mention with a reference, based on its context. The sub-word model is better equipped to model rare words and the placeholder tokens make maintaining consistency easier. 4.1 Generating Entity Anonymized Stories We explore two approaches to identifying and clustering entities: • NER Entity Anonymization: We use a named entity recognition (NER) model2 to 2https://spacy.io/api/entityrecognizer identify all people, organizations, and locations. We replace these spans with placeholder tokens (e.g. ent0). If any two entity mentions have an identical string, we replace them with the same placeholder. For example, all mentions of Bilbo Baggins will be abstracted to the same entity token, but Bilbo would be a separate abstract entity. • Coreference-based Entity Anonymization: The above approach cannot detect different mentions of an entity that use different strings. Instead, we use the Coreference Resolution model from Lee et al. (2018)3 to identify clusters of mentions. All spans in the same cluster are then replaced with the same entity placeholder string. Coreference models do not detect singleton mentions, so we also replace non-coreferent named entities with unique placeholders. 
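A minimal sketch of the NER-based variant is given below. It uses spaCy's entity recognizer, as referenced above; the specific model name, the label set, and the placeholder handling are assumptions made for illustration.

```python
# Minimal sketch of NER-based entity anonymization: identical entity strings
# share one placeholder (ent0, ent1, ...). The spaCy model name and the label
# set below are assumptions; exact spans depend on the NER model used.
import spacy

nlp = spacy.load("en_core_web_sm")
KEEP = {"PERSON", "ORG", "GPE", "LOC"}   # people, organizations, locations

def anonymize(text):
    doc = nlp(text)
    mapping, pieces, last = {}, [], 0
    for ent in doc.ents:
        if ent.label_ not in KEEP:
            continue
        if ent.text not in mapping:          # same surface string, same placeholder
            mapping[ent.text] = "ent{}".format(len(mapping))
        pieces.append(text[last:ent.start_char])
        pieces.append(mapping[ent.text])
        last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces), mapping

story, table = anonymize("Bilbo Baggins left the Shire. Bilbo never returned.")
# possible output: "ent0 left the ent1. ent2 never returned."
# note that "Bilbo" and "Bilbo Baggins" receive different placeholders here.
```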
4.2 Generating Entity References in a Story We train models to replace placeholder entity mentions with the correct surface form, for both NER-based and coreference-based entity anonymised stories. Both our models use a seq2seq architecture that generates an entity reference based on its placeholder and the story. To better model the specific challenges of entity generation, we also make use of a pointer mechanism and sub-word modelling. Pointer Mechanism Generating multiple consistent mentions of rare entity names is challenging. To aid re-use of previous names for an entity, we augment the standard seq2seq decoder with a pointer-copy mechanism (Vinyals et al., 2015). To generate an entity reference, the decoder can either generate a new abstract entity token or choose to copy an already generated abstract entity token, which encourages the model to use consistent naming for the entities. 3https://github.com/kentonl/e2e-coref 2653 Figure 3: Input for Coreferent entity reference generation. The model has a representation of the entity context in a bag of words form, all previous predicted values for the same anonymized entity token, and the full text story. The green circle represents the entity mention the model is attempting to fill. To train the pointer mechanism, the final hidden state of the model h is used as input to a classifier pcopy(h) = σ(wcopy · h). wcopy is a fixed dimension parameter vector. When the model classifier predicts to copy, the previously decoded abstract entity token with the maximum attention value is copied. One head of the decoder multi-head selfattention mechanism is used as the pointer copy attention head, to allow the heads to specialize. Sub-word Modelling Entities are often rare or novel words, so word-based vocabularies can be inadequate. We compare entity generation using word-based, byte-pair encoding (BPE) (Sennrich et al., 2015), and character-level models. NER-based Entity Reference Generation Here, each placeholder string should map onto one (possibly multiword) surface form—e.g. all occurrences of the placeholder ent0 should map only a single string, such as Bilbo Baggins. We train a simple model that maps a combination placeholder token and story (with anonymized entities) to the surface form of the placeholder. While the placeholder can appear multiple times, we only make one prediction for each placeholder as they all correspond to the same string. Coreference-based Entity Reference Generation Generating entities based on coreference clusters is more challenging than for our NER entity clusters, because different mentions of the same entity may use different surface forms. We generate a separate reference for each mention by adding the following inputs to the above model: • A bag-of-words context window around the specific entity mention, which allows local context to determine if an entity should be a name, pronoun or nominal reference. • Previously generated references for the same entity placeholder. For example, if the model is filling in the third instance of ent0, it receives that the previous two generations for ent0 were Bilbo, him. Providing the previous entities allows the model to maintain greater consistency between generations. 5 Experimental Setup 5.1 Data We use the WRITINGPROMPTS dataset from (Fan et al., 2018) 4 of 300k story premises paired with long stories. Stories are on average 734 words, making the generation far longer compared to related work on storyline generation. 
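Returning briefly to the pointer mechanism of Section 4.2, the copy gate pcopy(h) = σ(wcopy · h) can be sketched as follows. This is a minimal PyTorch illustration; the dimensions, names, and surrounding decoder are assumptions, not the paper's implementation.

```python
# Minimal sketch of the copy-vs-generate gate from Section 4.2 (PyTorch).
# Only the gate and the argmax-attention copy are shown; everything around it
# (decoder, vocabularies, training) is omitted and the wiring is assumed.
import torch
import torch.nn as nn

class CopyGate(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.w_copy = nn.Linear(hidden_size, 1)   # realizes w_copy . h

    def forward(self, h, self_attn, prev_tokens):
        # h: (batch, hidden) final decoder state
        # self_attn: (batch, t) attention over previously generated tokens
        # prev_tokens: (batch, t) previously generated token ids
        p_copy = torch.sigmoid(self.w_copy(h)).squeeze(-1)       # (batch,)
        copy_idx = self_attn.argmax(dim=1, keepdim=True)         # max attention
        copied = prev_tokens.gather(1, copy_idx).squeeze(1)      # copied token id
        return p_copy, copied   # if p_copy > 0.5, reuse `copied` instead of generating
```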
In this work, we focus on the prompt to story generation aspect of this task. We assume models receive a humanwritten prompt, as shown in Figure 1. We follow the previous preprocessing of limiting stories to 1000 words and fixing the vocabulary size to 19,025 for prompts and 104,960 for stories. 5.2 Baselines We compare our results to the Fusion model from Fan et al. (2018) which generates the full story directly from the prompt. We also implement various decomposition strategies as baselines: • Summarization: We propose a new baseline that generates a summary conditioned upon the prompt and then a story conditioned upon the summary. Story summaries are obtained with a multi-sentence summarization model (Wu et al., 2019) trained on the full-text version of the CNN-Dailymail summarization corpus (Hermann et al., 2015; Nallapati et al., 2016; See et al., 2017)5 and applied to stories. • Keyword Extraction: We generate a series of keywords conditioned upon the prompt and 4https://github.com/pytorch/fairseq/ tree/master/examples/stories 5https://github.com/abisee/ cnn-dailymail 2654 Figure 4: Human evaluations of different decomposed models for story generation. We find that using SRL action plans and coreference-resolution to build entity clusters generates stories that are preferred by human judges. Decomposition Stage 1 −log p(z∗) Stage 2 −log p(x|z∗) Summary 4.20 5.09 Keyword 6.92 4.23 Compression 5.05 3.64 SRL Action Plan 2.72 3.95 NER Entity Anonymization 3.32 4.75 Coreference Anonymization 3.15 4.55 Table 1: Negative log likelihood of generating stories using different decompositions (lower is easier for the model). Stage 1 is the generation of the intermediate representation z∗, and Stage 2 is the generation of the story x conditioned upon z∗. Entity generation is with a word-based vocabulary to be consistent with the other models. then a story conditioned upon the keywords, based on Yao et al. (2019). Following Yao et al, we extract keywords with the RAKE algorithm (Rose et al., 2010)6. Yao et al. extract one word per sentence, but we find that extracting n = 10 keyword phrases per story worked well, as our stories are much longer. • Sentence Compression: Inspired by Xu et al. (2018), we generate a story with compressed sentences conditioned upon the prompt and then a story conditioned upon the compression. We use the same deletion-based compression data as Xu et al., from Filippova and Altun (2013)7. We train a seq2seq model to compress all non-dialog story sentences (as the training data does not contain much spoken dialogue). The compressed sentences are concatenated to form the compressed story. 6https://pypi.org/project/rake-nltk/ 7https://github.com/ google-research-datasets/ sentence-compression Figure 5: Average Longest Common Subsequence of Generated Stories with human-written stories in the training set. 5.3 Training We implement models using fairseq-py (Ott et al., 2019)8 in PyTorch and train Fan et al. (2018)’s convolutional architecture. We tune all hyperparameters on validation data. 5.4 Generation We suppress the generation of unknown tokens to ease human evaluation. For all evaluations, we require stories to be at least 150 words and cut off the story at the nearest sentence for stories longer than 250 words. We generate stories with temperature 0.8 and random top-k sampling method proposed in (Fan et al., 2018), where next words are sampled from the top k candidates rather than the entire vocabulary distribution. We set k = 10. 
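This decoding step can be sketched as follows; a minimal PyTorch sampler written for illustration, not the released implementation.

```python
# Minimal sketch of temperature + top-k sampling as described in Section 5.4
# (temperature 0.8, k = 10). `logits` is a 1-D tensor over the vocabulary.
import torch

def sample_next(logits, k=10, temperature=0.8):
    logits = logits / temperature
    top_vals, top_idx = torch.topk(logits, k)          # keep the k best candidates
    probs = torch.softmax(top_vals, dim=-1)             # renormalize over the top-k
    choice = torch.multinomial(probs, num_samples=1)    # sample one of them
    return top_idx[choice].item()                       # map back to a vocabulary id

# next_word_id = sample_next(step_logits)   # called once per generated word
```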
6 Experiments 6.1 Comparing Decomposition Strategies Automated Evaluation We compare the relative difficulty of modelling using each decomposition strategy by measuring the log loss of the different stages in Table 1. We observe that generating the SRL structure has a lower negative loglikelihood and so is much easier than generating 8https://github.com/pytorch/fairseq/ 2655 Figure 6: Our decomposition can generate more coherent stories than previous work. either summaries, keywords, or compressed sentences — a benefit of its more structured form. We find keyword generation is especially difficult as the identified keywords are often the more salient, rare words appearing in the story, which are challenging for neural seq2seq models to generate. This result suggests that rare words should appear mostly at the last levels of the decomposition. Further, we compare models with entityanonymized stories as an intermediate representation, either with NER-based or coreference-based entity anonymization. Entity references are then filled using a word-based model.9 Perhaps surprisingly, naming entities proves more difficult than creating the entity-anonymized stories—providing insight into the relative difficulty of different subproblems of story generation. Finally, we analyze the similarity of the generated stories with the stories in the training set. We quantify this by measuring the maximum and average longest common subsequence of tokens of a generated story with all human-written stories from the training set. High LCS values would indicate models are copying large subparts from existing stories rather than creatively writing new stories. Results shown in Figure 5 indicate that our proposed decomposition copies slightly less long sequences from the training set compared to the baselines — by separating verb and entity generation into distinct parts, we generate fewer long sequences already present in the training set. 9To make likelihoods are comparable across models. Human Evaluation To compare overall story quality using various decomposition strategies, we conduct human evaluation using a crowdworking platform. Judges are shown two different stories that were generated based on the same humanwritten prompt (but do not see the prompt). Evaluators are asked to mark which story they prefer. 100 stories are evaluated for each model by 3 different judges. To reduce variance, stories from all models are trimmed to 200 words. Figure 6 shows that human evaluators prefer our novel decompositions over a carefully tuned Fusion model from Fan et al. (2018) by about 60% in a blind comparison. We see additive gains from modelling actions and entities. In a second study, evaluators compared various baselines against stories generated by our strongest model, which uses SRL-based action plans and coreference-based entity anonymization. In all cases, our full decomposition is preferred. 6.2 Effect of SRL Decomposition Human-written stories feature a wide variety of events, while neural models are plagued by generic generations and repetition. Table 2 quantifies model performance on two metrics to assess action diversity: (1) the number of unique verbs generated, averaged across all stories (2) the percentage of diverse verbs, measured by the percent of all verbs generated in the test set that are not one of the top 5 most frequent verbs. A higher percentage indicates more diverse events.10 Our decomposition using the SRL predicateargument structure improves the model’s ability to generate diverse verbs. 
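For concreteness, the two statistics can be computed as in the sketch below. spaCy is used to identify verbs, as in the paper; the model name and the remaining counting details are assumptions.

```python
# Minimal sketch of the verb-diversity metrics in Section 6.2: (1) average
# number of unique verbs per story, (2) percentage of generated verbs that are
# not among the 5 most frequent verbs. Counting details are assumptions.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # model name is an assumption

def verb_metrics(stories):
    per_story_unique, all_verbs = [], Counter()
    for story in stories:
        verbs = [t.lemma_.lower() for t in nlp(story) if t.pos_ == "VERB"]
        per_story_unique.append(len(set(verbs)))
        all_verbs.update(verbs)
    top5 = {v for v, _ in all_verbs.most_common(5)}
    diverse = sum(c for v, c in all_verbs.items() if v not in top5)
    return (sum(per_story_unique) / len(per_story_unique),   # avg. unique verbs
            100.0 * diverse / sum(all_verbs.values()))       # % diverse verbs
```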
Adding verb attention leads to further improvement. Qualitatively, the model can often outline clear action sequences, as shown in Figure 7. However, all models remain far from matching the diversity of human stories. 6.3 Comparing Entity Reference Models We explored a variety of different ways to generate the full text of abstracted entities—using different amounts of context and different granularities of subword generation. To compare these models, we calculated their accuracy at predicting the correct reference in Table 3. Each model evaluates n = 10, 50, 100 different entities in the test set, 1 real and n−1 randomly sampled distractors. Mod10We identify verbs using Spacy: https://spacy. io/ 2656 Figure 7: Example generated action plan for the SRL + NER Entity Anonymization model. It shows a plausible sequence of actions for a character. Model # Unique Verbs % Diverse Verbs Human Stories 34.0 76.5 Fusion 10.3 61.1 Summary 12.4 60.6 Keyword 9.1 58.2 Compression 10.3 54.3 SRL 14.4 62.5 + verb-attention 15.9 64.9 Table 2: Action Generation. Generating the SRL structure improves verb diversity and reduces repetition. els must give the true mention the highest likelihood. We analyze accuracy on the first mention of an entity, an assessment of novelty, and subsequent references, an assessment of consistency. Effect of Sub-word Modelling Table 3 shows that modelling a character-level vocabulary for entity generation outperforms BPE and word-based models, because of the diversity of entity names. This result highlights a key advantage of multistage modelling: the usage of specialized modelling techniques for each sub-task. Effect of Additional Context Entity references should be contextual. Firstly, names must be appropriate for the story setting—Bilbo Baggins might be more appropriate for a fantasy novel. Subsequent references to the character may be briefer, depending on context—for example, he is more likely to be referred to as he or Bilbo than his full name in the next sentence. We compare three models ability to fill entities based on context (using coreferenceanonymization): a model that does not receive the story, a model that uses only leftward context (as in Clark et al. (2018)), and a model with access to the full story. We show in Table 3 that having access to the full story provides the best performance. Having no access to any of the story decreases ranking accuracy, even though the model still receives the local context window of the entity as input. The left story context model performs better, but looking at the complete story provides additional gains. We note that full-story context can only be provided in a multi-stage generation approach. Qualitative Examples Figure 8 shows examples of entity naming in three stories of different genres. We evaluate different genres to examine if generated entities adapt to the style of the story. We show that models can adapt to the context—for example generating The princess and The Queen when the context includes monarchy. 6.4 Effect of Entity Anonymization To understand the effectiveness of the entity generation models, we examine their performance by analyzing generation diversity. Diversity of Entity Names Human-written stories often contain many diverse, novel names for people and places. However, these tokens are rare and subsequently difficult for standard neural models to generate. Table 4 shows that the fusion model and baseline decomposition strategies generate very few unique entities in each story. 
Generated entities are often generic names such as John. Our proposed decompositions generate substantially more unique entities than strong baselines. Interestingly, we found that using coreference resolution for entity anonymization led to fewer unique entity names than generating the names independently. This result can be explained by the coreference-based model re-using previous names more frequently, as well as using more pronouns. Coherence of Entity Clusters Well structured stories will refer back to previously mentioned characters and events in a consistent manner. To evaluate if the generated stories have these characteristics, we examine the coreference properties in Table 5. We quantify the average number of coreference clusters and the diversity of entities within each cluster (e.g. the cluster Bilbo, he, the hobbit is more diverse than the cluster he, he, he). Our full model produces more non-singleton coreference chains, suggesting greater coherence, and also gives different mentions of the same entity more diverse names. However, both numbers are still lower than for human generated stories, indicating potential for future work. 2657 First Mentions Subsequent Mentions Model Rank 10 Rank 50 Rank 100 Rank 10 Rank 50 Rank 100 Word-Based 42.3 25.4 17.2 48.1 38.4 28.8 BPE 48.1 20.3 25.5 52.5 50.7 48.8 Character-level 64.2 51.0 35.6 66.1 55.0 51.2 No story 50.3 40.0 26.7 54.7 51.3 30.4 Left story context 59.1 49.6 33.3 62.9 53.2 49.4 Full story 64.2 51.0 35.6 66.1 55.0 51.2 Table 3: Accuracy at choosing the correct reference string for a mention, discriminating against 10, 50 and 100 random distractors. We break out results for the first mention of an entity (requiring novelty to produce an appropriate name in the context) and subsequent references (typically pronouns, nominal references, or shorter forms of names). We compare the effect of sub-word modelling and providing longer contexts. Model # Unique Entities Human Stories 2.99 Fusion 0.47 Summary 0.67 Keyword 0.81 Compression 0.21 SRL + NER Entity Anonymization 2.16 SRL + Coreference Anonymization 1.59 Table 4: Diversity of entity names. Baseline models generate few unique entities per story. Our decompositions generate more, but still fewer than human stories. Using coreference resolution to build entity clusters reduces diversity here—partly due to re-using existing names more, and partly due to greater use of pronouns. Model # Coref Chains Unique Names per Chain Human Stories 4.77 3.41 Fusion 2.89 2.42 Summary 3.37 2.08 Keyword 2.34 1.65 Compression 2.84 2.09 SRL + NER Entity Anonymization 4.09 2.49 SRL + Coreference Anonymization 4.27 3.15 Table 5: Analysis of non-singleton coreference clusters. Baseline models generate very few different coreference chains, and repetitive mentions within clusters. Our models generate larger and more diverse clusters. Qualitative Example Figure 9 displays a sentence constructed to require the generation of an entity as the final word. The fusion model does not perform any implicit coreference to associate the allergy with his dog. In contrast, coreference entity fill produces a high quality completion. 7 Related Work Decomposing natural language generation into several steps has been extensively explored (Reiter and Dale, 2000; Gatt and Krahmer, 2018). In classical approaches to text generation, various stages were used to produce final written text. 
For example, algorithms were developed to determine content and discourse at an abstract level, then sentence aggregation and lexicalization, and finally steps to resolve referring expressions (Hovy, 1990; Dalianis and Hovy, 1993; Wahlster et al., 1993; Ratnaparkhi, 2000; Malouf, 2000). Our work builds upon these approaches. Story Generation with Planning Story generation using a plan has been explored using many different techniques. Traditional approaches organized sequences of character actions with hand crafted models (Riedl and Young, 2010; Porteous and Cavazza, 2009). Recent work extended this to modelling story events (Martin et al., 2017; Mostafazadeh et al., 2016), plot graphs (Li et al., 2013), plot summaries (Appling and Riedl, 2009), story fragments or vignettes (Riedl, 2010), or used sequences of images (Huang et al., 2016) or descriptions (Jain et al., 2017). We build on previous work that decomposes generation. Xu et al. (2018) learn a skeleton extraction model and a generative model conditioned upon the skeleton, using reinforcement learning to train jointly. Zhou et al. (2018) train a storyline extraction model for news articles, but require supervision from manually annotated storylines. Yao et al. (2019) use RAKE (Rose et al., 2010) to extract storylines, and condition upon the storyline to write the story using dynamic and static schemas that govern if the storyline can change. Entity Language Models An outstanding challenge in text generation is modelling and tracking entities. Centering (Grosz et al., 1995) gives a theoretical account of how referring expressions for entities are chosen in discourse context. Named entity recognition has been incorporated into language models since at least Gotoh et al. (1999), and can improve domain adaptation (Liu and Liu, 2658 Figure 8: Generating entity references for different genres, using entity-anonymized human written stories. Models use the story context to fill in relevant entities. Color indicates coreferent clusters. Figure 9: Constructed sentence where the last word refers to an entity. The coreference model is able to track the entities, whereas the fusion model relies heavily on local context to generate the next words. 2007). Language models have been extended to model entities based on information such as entity type (Parvez et al., 2018). Recent work has incorporated learning representations of entities and other unknown words (Kobayashi et al., 2017), as well as explicitly model entities by dynamically updating these representations to track changes over time and context (Ji et al., 2017). Dynamic updates to entity representations are used in other story generation models (Clark et al., 2018). Non-Autoregressive Generation Our method proposes decomposing left-to-right generation into multiple steps. Recent work has explored non-autoregressive generation for more efficient language modeling and machine translation. Ford et al. (2018) developed two-pass language models, generating templates then filling in words. The partially filled templates could be seen as an intermediary representation similar to generating a compressed story. Other models allow arbitrary order generation using insertion operations (Gu et al., 2019; Stern et al., 2019) and Gu et al. (2017) explored parallel decoding for machine translation. In contrast, we focus on decomposing generation to focus on planning, rather than efficient decoding at inference time. 
8 Conclusion We proposed an effective method for writing short stories by separating the generation of actions and entities. We show through human evaluation and automated metrics that our novel decomposition improves story quality. References D Scott Appling and Mark O Riedl. 2009. Representations for learning to summarize plots. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Elizabeth Clark, Yangfeng Ji, and Noah A Smith. 2018. Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2250–2260. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2018. Transformer-xl: Language modeling with longer-term dependency. Hercules Dalianis and Eduard Hovy. 1993. Aggregation in natural language generation. In European Workshop on Trends in Natural Language Generation, pages 88–105. Springer. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George E Dahl. 2018. The importance of generation order in language modeling. arXiv preprint arXiv:1808.07910. 2659 Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65–170. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proc. of ICML. Yoshihiko Gotoh, Steve Renals, and Gethin Williams. 1999. Named entity tagged language models. In Acoustics, Speech, and Signal Processing, 1999. Proceedings., 1999 IEEE International Conference on, volume 1, pages 513–516. IEEE. Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational linguistics, 21(2):203–225. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Nonautoregressive neural machine translation. arXiv preprint arXiv:1711.02281. Jiatao Gu, Qi Liu, and Kyunghyun Cho. 2019. Insertion-based decoding with automatically inferred generation order. arXiv preprint arXiv:1902.01370. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and whats next. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693– 1701. Eduard H Hovy. 1990. Pragmatics and natural language generation. Artificial Intelligence, 43(2):153– 197. Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233–1239. Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. 2017. Story generation from sequence of independent short descriptions. arXiv preprint arXiv:1707.05501. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity representations in neural language models. arXiv preprint arXiv:1708.00781. Sosuke Kobayashi, Naoaki Okazaki, and Kentaro Inui. 2017. A neural language model for dynamically representing the meanings of unknown words and entities in a discourse. arXiv preprint arXiv:1709.01679. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 687–692. Boyang Li, Stephen Lee-Urban, George Johnston, and Mark Riedl. 2013. Story generation with crowdsourced plot graphs. Feifan Liu and Yang Liu. 2007. Unsupervised language model adaptation incorporating named entity information. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 672–679. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. Robert Malouf. 2000. The order of prenominal adjectives in natural language generation. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. Lara J Martin, Prithviraj Ammanabrolu, William Hancock, Shruti Singh, Brent Harrison, and Mark O Riedl. 2017. Event representations for automated story generation with deep neural nets. arXiv preprint arXiv:1706.01331. Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James Allen, and Lucy Vanderwende. 2016. Caters: Causal and temporal relation scheme for semantic annotation of event structures. In Proceedings of the Fourth Workshop on Events, pages 51–61. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Md Rizwan Parvez, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2018. Building language models for text with named entities. arXiv preprint arXiv:1805.04836. 2660 Julie Porteous and Marc Cavazza. 2009. Controlling narrative generation with planning trajectories: the role of constraints. In Joint International Conference on Interactive Digital Storytelling, pages 234– 245. Springer. Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 194–201. Association for Computational Linguistics. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press. Mark O Riedl. 2010. Story planning: Creativity through exploration, retrieval, and analogical transformation. Minds and Machines, 20(4):589–614. 
Mark O Riedl and Robert Michael Young. 2010. Narrative planning: Balancing plot and character. Journal of Artificial Intelligence Research, 39:217–268. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. arXiv preprint arXiv:1902.03249. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In AAAI Conference on Artificial Intelligence. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proc. of NIPS. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Wolfgang Wahlster, Elisabeth Andr´e, Wolfgang Finkler, Hans-J¨urgen Profitlich, and Thomas Rist. 1993. Plan-based integration of natural language and graphics generation. Artificial intelligence, 63(1-2):387–427. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations. Jingjing Xu, Yi Zhang, Qi Zeng, Xuancheng Ren, Xiaoyan Cai, and Xu Sun. 2018. A skeletonbased model for promoting coherence among sentences in narrative story generation. arXiv preprint arXiv:1808.06945. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Planand-write: Towards better automatic storytelling. In Association for the Advancement of Artificial Intelligence. Deyu Zhou, Linsen Guo, and Yulan He. 2018. Neural storyline extraction model for storyline generation from news articles. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1727–1736.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2661–2672 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2661 Argument Generation with Retrieval, Planning, and Realization Xinyu Hua, Zhe Hu, and Lu Wang Khoury College of Computer Sciences Northeastern University Boston, MA 02115 {hua.x, hu.zhe}@husky.neu.edu, [email protected] Abstract Automatic argument generation is an appealing but challenging task. In this paper, we study the specific problem of counterargument generation, and present a novel framework, CANDELA. It consists of a powerful retrieval system and a novel two-step generation model, where a text planning decoder first decides on the main talking points and a proper language style for each sentence, then a content realization decoder reflects the decisions and constructs an informative paragraph-level argument. Furthermore, our generation model is empowered by a retrieval system indexed with 12 million articles collected from Wikipedia and popular English news media, which provides access to highquality content with diversity. Automatic evaluation on a large-scale dataset collected from Reddit shows that our model yields significantly higher BLEU, ROUGE, and METEOR scores than the state-of-the-art and non-trivial comparisons. Human evaluation further indicates that our system arguments are more appropriate for refutation and richer in content. 1 Introduction Counter-argument generation aims to produce arguments of a different stance, in order to refute the given proposition on a controversial issue (Toulmin, 1958; Damer, 2012). A system that automatically constructs counter-arguments can effectively present alternative perspectives along with associated evidence and reasoning, and thus facilitate a more comprehensive understanding of complicated problems when controversy arises. Nevertheless, constructing persuasive arguments is a challenging task, as it requires an appropriate combination of credible evidence, rigorous logical reasoning, and sometimes emotional appeal (Walton et al., 2008; Wachsmuth et al., 2017a; Wang et al., 2017). A sample counter-argument List of exonerated death row inmates... there had been 156 exonerations of prisoners on death row in the United States since 1973... Original post: Death penalty is more rational than life in prison. ...I don't believe murderers and rapists can be successfully integrated... Counter-argument: In theory I agree with you. But in reality we will never have a perfect justice system. Unreliable evidence is used when there is no witnesses, which could result in wrongful convictions. In the US, there had been 156 death row inmates who were exonerated since 1973. If we execute them, we can never undo it. I hope it can change your view. The Grim Facts About Lethal Injection ...Our justice system is a joke and we are asking other people to... The problem of innocence in death penalty cases ...The evidence in death penalty cases is not always very strong. After all, in many murders, there are no surviving witnesses... Source: The New York Times Source: Wikipedia Source: The Wall Street Journal Figure 1: Sample counter-argument for a pro-death penalty statement from Reddit /r/ChangeMyView. The argument consists of a sequence of propositions, by synthesizing opinions and facts from diverse sources. Sentences in italics contain stylistic languages for argumentation purpose. for a pro-death penalty post is shown in Figure 1. 
As can be seen, a sequence of talking points on the “imperfect justice system” are presented: it starts with the fundamental concept, then follows up with more specific evaluative claim and supporting fact. Although retrieval-based methods have been investigated to construct counter-arguments (Sato et al., 2015; Reisert et al., 2015), they typically produce a collection of sentences from disparate sources, thus fall short of coherence and conciseness. Moreover, human always deploy stylistic languages with specific argumentative functions to promote persuasiveness, such as making a concessive move (e.g., “In theory I agree with you"). This further requires the generation system to have better control of the languages style. Our goal is to design a counter-argument generation system to address the above challenges and 2662 produce paragraph-level arguments with rich-yetcoherent content. To this end, we present CANDELA—a novel framework to generate CounterArguments with two-step Neural Decoders and ExternaL knowledge Augmentation.1 Concretely, CANDELA has three major distinct features: First, it is equipped with two decoders: one for text planning—selecting talking points to cover for each sentence to be generated, the other for content realization—producing a fluent argument to reflect decisions made by the text planner. This enables our model to produce longer arguments with richer information. Furthermore, multiple objectives are designed for our text planning decoder to both handle content selection and ordering, and select a proper argumentative discourse function of a desired language style for each sentence generation. Lastly, the input to our argument generation model is augmented with keyphrases and passages retrieved from a large-scale search engine, which indexes 12 million articles from Wikipedia and four popular English news media of varying ideological leanings. This ensures access to reliable evidence, high-quality reasoning, and diverse opinions from different sources, as opposed to recent work that mostly considers a single origin, such as Wikipedia (Rinott et al., 2015) or online debate portals (Wachsmuth et al., 2018b). We experiment with argument and counterargument pairs collected from the Reddit /r/ChangeMyView group. Automatic evaluation shows that the proposed model significantly outperforms our prior argument generation system (Hua and Wang, 2018) and other non-trivial comparisons. Human evaluation further suggests that our model produces more appropriate counter-arguments with richer content than other automatic systems, while maintaining a fluency level comparable to human-constructed arguments. 2 Related Work To date, the majority of the work on automatic argument generation leads to rule-based models, e.g., designing operators that reflect strategies from argumentation theory (Reed et al., 1996; Carenini and Moore, 2000). Information retrieval systems are recently developed to extract argu1Code and data are available at https://xinyuhua. github.io/Resources/acl19/. ments relevant to a given debate motion (Sato et al., 2015). Although content ordering has been investigated (Reisert et al., 2015; Yanase et al., 2015), the output arguments are usually a collection of sentences from heterogeneous information sources, thus lacking coherence and conciseness. Our work aims to close the gap by generating eloquent and coherent arguments, assisted by an argument retrieval system. 
Recent progress in sequence-to-sequence (seq2seq) text generation models has delivered both fluent and content rich outputs by explicitly conducting content selection and ordering (Gehrmann et al., 2018; Wiseman et al., 2018), which is a promising avenue for enabling end-to-end counter-argument construction (Le et al., 2018). In particular, our prior work (Hua and Wang, 2018) leverages passages retrieved from Wikipedia to improve the quality of generated arguments, yet Wikipedia itself has the limitation of containing mostly facts. By leveraging Wikipedia and popular news media, our proposed pipeline can enrich the factual evidence with high-quality opinions and reasoning. Our work is also in line with argument retrieval research, where prior effort mostly considers single-origin information source (Rinott et al., 2015; Levy et al., 2018; Wachsmuth et al., 2017b, 2018b). Recent work by Stab et al. (2018) indexes all web documents collected in Common Crawl, which inevitably incorporates noisy, lowquality content. Besides, existing work treats individual sentences as arguments, disregarding their crucial discourse structures and logical relations with adjacent sentences. Instead, we use multiple high-quality information sources, and construct paragraph-level passages to retain the context of arguments. 3 Overview of CANDELA Our counter-argument generation framework, as shown in Figure 2, has two main components: argument retrieval model (§ 4) that takes the input statement and a search engine, and outputs relevant passages and keyphrases, which are used as input for our argument generation model (§ 5) to produce a fluent and informative argument. Concretely, the argument retrieval component retrieves a set of candidate passages from Wikipedia and news media (§ 4.1), then further selects passages according to their stances towards 2663 0 1 1 Encoder Input statement: death penalty is more rational than life in prison... in theory i agree with you . but reality in we will never ... unreliable evidence is used when ... Attention Keyphrase Memory The New York Times -------------------------------------Hard selection (>0.5?) Keyphrase Selection (Planning) Output: 1 2 4 5 3 s0 sj zt {α1m } {α2m } {α3m } hi s1 s2 yp =<filler> yp =<content> yp =<content> 0 1 0 0 0 ... 0 1 0 0 0 ... 0.1 0.0 0.8 0.1 ... 0.9 0.1 0.9 0.2 0.1 ... 0.0 1 0 0 0 ... 0.1 0.9 0.1 0.1 ... 0.0 emb emb emb Hard selection (>0.5?) 2 <START> death penalty justice system death row ... life in prison (Realization) (Retrieval) ... ... Figure 2: Architecture of CANDELA. 1 Argument retrieval (§ 4): a set of passages are retrieved and ranked based on relevance and stance (§ 4.1, 4.3), from which 2 a set of keyphrases are extracted (§ 4.2), with both as input for argument generation. 3 The biLSTM encoder consumes the input statement and passages returned from step 1. 4 A text planning decoder outputs a representation per sentence, and simultaneously predicts an argumentative function and selects keyphrases to include for the next sentence to be generated (§ 5.2). 5 A content realization decoder produces the counter-argument (§ 5.3). the input statement (§ 4.3). A keyphrase extraction module distills the refined passages into a set of talking points, which comprise the keyphrase memory as additional input for generation (§ 4.2). 
The argument generation component first runs the text planning decoder (§ 5.2) to produce a sequence of hidden states, each corresponding to a sentence-level representation that encodes the selection of keyphrases to cover, as well as the predicted argumentative function for a desired language style. The content realization decoder (§ 5.3) then generates the argument conditioned on the sentence representations. 4 Argument Retrieval 4.1 Information Sources and Indexing We aim to build a search engine from diverse information sources with factual evidence and varied opinions of high quality. To achieve that, we use Common Crawl2 to collect a large-scale online news dataset covering four major English news media: The New York Times (NYT), The Washington Post (WaPo), Reuters, and The Wall Street Journal (WSJ). HTML files are processed using the open-source tool jusText (Pomikálek, 2011) to extract article content. We deduplicate articles and remove the ones with less than 50 words. We also download a Wikipedia 2http://commoncrawl.org/ Source # Articles # Passages Date Range Wikipedia 5,743,901 42,797,543 dump of 12/2016 WaPo 1,109,672 22,564,532 01/1997 - 10/2018 NYT 1,952,446 28,904,549 09/1895 - 09/2018 Reuters 1,052,592 9,913,400 06/2005 - 09/2018 WSJ 2,059,128 16,109,392 01/1996 - 09/2018 Total 11,917,739 120,289,416 Table 1: Statistics on information sources for argument retrieval. News media are sorted by ideological leanings from left to right, according to https: //www.adfontesmedia.com/. dump. About 12 million articles are processed in total, with basic statistics shown in Table 1. We segment articles into passages with a sliding window of three sentences, with a step size of two. We further constraint the passages to have at least 50 words. For shorter passages, we keep adding subsequent sentences until reaching the length limit. Per Table 1, 120 million passages are preserved and indexed with Elasticsearch (Gormley and Tong, 2015) as done in Stab et al. (2018). Query Formulation. For an input statement with multiple sentences, one query is constructed per sentence, if it has more than 5 content words (10 for questions), and at least 3 are distinct. For each query, the top 20 passages ranked by BM25 (Robertson et al., 1995) are retained, per medium. All passages retrieved for the input statement are merged and deduplicated, and they will 2664 be ranked as discussed in § 4.3. 4.2 Keyphrase Extraction Here we describe a keyphrase extraction procedure for both input statements and retrieved passages, which will be utilized for passage ranking as detailed in the next section. For input statement, our goal is to identify a set of phrases representing the issues under discussion, such as “death penalty” in Figure 1. We thus first extract the topic signature words (Lin and Hovy, 2000) for input representation, and expand them into phrases that better capture semantic meanings. Concretely, topic signature words of an input statement are calculated against all input statements in our training set with log-likelihood ratio test. In order to cover phrases with related terms, we further expand this set with their synonyms, hyponyms, hypernyms, and antonyms based on WordNet (Miller, 1994). The statements are first parsed with Stanford part-of-speech tagger (Manning et al., 2014). Then regular expressions are applied to extract candidate noun phrases and verb phrases (details in Appendix A.1). 
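The WordNet-based expansion mentioned above can be sketched with NLTK's WordNet interface; the snippet below is a minimal illustration rather than the paper's exact procedure, and the filtering of the expanded set is left out.

```python
# Minimal sketch of expanding a topic signature word with WordNet relations
# (synonyms, antonyms, hypernyms, hyponyms), as described in Section 4.2.
# Requires the WordNet corpus (nltk.download("wordnet")); details are assumed.
from nltk.corpus import wordnet as wn

def expand(word):
    related = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            related.add(lemma.name().replace("_", " "))                   # synonyms
            related.update(a.name().replace("_", " ") for a in lemma.antonyms())
        for rel in synset.hypernyms() + synset.hyponyms():
            related.update(l.name().replace("_", " ") for l in rel.lemmas())
    related.discard(word)
    return related

# expand("penalty") would include, e.g., "punishment" among the related terms.
```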
A keyphrase is selected if it contains: (1) at least one content word, (2) no more than 10 tokens, and (3) at least one topic signature word or a Wikipedia article title. For retrieved passages, their keyphrases are extracted using the same procedure as above, except that the input statement’s topic signature words are used as references again. 4.3 Passage Ranking and Filtering We merge the retrieved passages from all media and rank them based on the number of words in overlapping keyphrases with the input statement. To break a tie, with the input as the reference, we further consider the number of its topic signature words that are covered by the passage, then the coverage of non-stopword bigrams and unigrams. In order to encourage diversity, we discard a passage if more than 50% of its content words are already included by a higher ranked passage. In the final step, we filter out passages if they have the same stance as the input statement for given topics. We determine the stances of passages by adopting the stance scoring model proposed by Bar-Haim et al. (2017). More details can be found in Appendix A.2. 5 Argument Generation 5.1 Task Formulation Given an input statement X = {xi}, a set of passages, and a keyphrase memory M, our goal is to generate a counter-argument Y = {yt} of a different stance as X, xi and yt are tokens at timestamps i and t. Built upon the sequenceto-sequence (seq2seq) framework with input attention (Sutskever et al., 2014; Bahdanau et al., 2015), the input statement and the passages selected in § 4 are encoded by a bidirectional LSTM (biLSTM) encoder into a sequence of hidden states hi. The last hidden state of the encoder is used as the first hidden state of both text planning decoder and content realization decoder. As depicted in Figure 2, the counter-argument is generated as follows. A text planning decoder (§ 5.2) first calculates a sequence of sentence representations sj (for the j-th sentence) by encoding the keyphrases selected from the previous timestamp j −1. During this step, an argumentative function label is predicted to indicate a desired language style for each sentence, and a subset of the keyphrases are selected from M (content selection) for the next sentence. In the second step, a content realization decoder (§ 5.3) generates the final counter-argument conditioned on previously generated tokens and the corresponding sentence representation sj. 5.2 Text Planning Decoder Text planning is an important component for natural language generation systems to decide on content structure for the target generation (Lavoie and Rambow, 1997; Reiter and Dale, 2000). We propose a text planner with two objectives: selecting talking points from the keyphrase memory M, and choosing a proper argumentative function per sentence. Concretely, we train a sentence-level LSTM that learns to generate a sequence of sentence representations {sj} given the selected keyphrase set C(j) as input for the j-th sentence: sj = f(sj−1, X ek∈C(j) ek) (1) where f is an LSTM network, ek is the embedding for a selected phrase, represented by summing up all its words’ Glove embeddings (Pennington et al., 2014) in our experiments. Content Selection C(j). We propose an attention mechanism to conduct content selection and yield 2665 C(j) from the representation of the previous sentence sj−1 to encourage topical coherence. 
To allow the selection of multiple keyphrases, we use the sigmoid function to calculate the score: αjm = sigmoid(emW pasj−1) (2) where W pa are trainable parameters, keyphrases with αjm > 0.5 are included in C(j), and the keyphrase with top attention value is always selected. We further prohibit a keyphrase from being chosen for more than once in multiple sentences. For the first sentence s0, C(0) only contains <start>, whose embedding is randomly initialized. During training, the true labels of C(j) are constructed as follows: a keyphrase in M is selected for the j-th goldstandard argument sentence if they overlap with any content word. Argumentative Function Prediction yp j . As shown in Figure 1, humans often deploy stylistic languages to achieve better persuasiveness, e.g. agreement as a concessive move. We aim to inform the realization decoder about the choice of style, and thus distinguish between two types of argumentative functions: argumentative content sentence which delivers the critical ideas, e.g. “unreliable evidence is used when there is no witness”, and argumentative filler sentence which contains stylistic languages or general statements (e.g., “you can’t bring dead people back to life”). Since we do not have argumentative function labels, during training, we use the following rules to automatically label each sentence as content sentence if it has at least 10 words (20 for questions) and satisfy the following conditions: (1) it has at least two topic signature words of the input statement or a gold-standard counter-argument3, or (2) at least one topic signature word with a discourse marker at the beginning of the sentence. If the first three words in a content sentence contain a pronoun, the previous sentence is labeled as such too. Discourse markers are selected from PDTB discourse connectives (e.g., as a result, eventually, or in contrast). The full list is included in Appendix A.3. All other sentences become filler sentences. In the future work, we will consider utilizing learning-based methods, e.g., Hidey et al. (2017), to predict richer argumentative functions. 3When calculating topic signatures for gold-standard arguments, all replies in the training set are used as background. The argumentative function label yp j for the j-th sentence is calculated as follows: P(yp j |yp <j, X) = softmax(wT p (tanh (W po[cj; sj])) + bp) (3) cj = X em∈M αjmem (4) where αjm is the alignment score computed as in Eq. 2, cj is the attention weighted context vector, wp, W po, and bp are trainable parameters. 5.3 Content Realization Decoder The content realization decoder generates the counter-argument word by word, with another LSTM network fw. We denote the sentence id of the t-th word in the argument as J(t), then the sentence representation sJ(t) from the text planning decoder, together with the embedding of the previous generated token yt−1, are fed as input to calculate the hidden state zt: zt = f w(zt−1, tanh(W wpsJ(t) + W wwyt−1 + bw)) (5) The conditional probability of the next token yt is then computed over a standard softmax, with an attention mechanism applied on the encoder hidden states hi to obtain the context vector cw t : P(yt|y<t, X, sJ(t)) = softmax(wT w(tanh (W wo[cw t ; zt])) + bo) (6) cw t = |X| X i=1 βtihi (7) βti = softmax(hiW wazt) (8) where βti is the input attention, W wp, W ww, W wo, W wa, bo, ww, and bw are learnable. Reranking-based Beam Search. 
Our content realization decoder utilizes beam search enhanced with a reranking mechanism, where we sort the beams at the end of each sentence by the number of selected keyphrases that are generated. We also discard beams with n-gram repetition for n ≥4. 5.4 Training Objective Given all model parameters θ, our mixed objective considers the target argument (Larg(θ)), the argumentative function type (Lfunc(θ)), and the next sentence keyphrase selection (Lsel(θ)): 2666 L(θ) = Larg(θ) + γ · Lfunc(θ) + η · Lsel(θ) (9) Larg(θ) = − X (X,Y )∈D log P(Y |X; θ) (10) Lfunc(θ) = − X (X,Y p) log P(Y p|X; θ) (11) Lsel(θ) = − X Y p |Y p| X j=1 ( X em∈C(j) log(αjm) + X em̸∈C(j) log(1 −αjm)) (12) where D is the training corpus, (X, Y ) are input statement and counter-argument pairs, and Y p are the sentence function labels. αjm are keyphrase selection labels as computed in Eq. 2. For simplicity, we set γ and η as 1.0 in our experiments, while they can be further tuned as hyper-parameters. 6 Experimental Setups 6.1 Data Collection and Preprocessing We use the same methodology as in our prior work (Hua and Wang, 2018) to collect an argument generation dataset from Reddit /r/ChangeMyView.4 To construct input statement and counter-argument pairs, we treat the original poster (OP) of each thread as the input. We then consider the high quality root replies, defined as the ones awarded with ∆s or with more upvotes than downvotes (i.e., karma > 0). It is observed that each paragraph often makes a coherent argument. Therefore, these replies are broken down into paragraphs, and a paragraph is retained as a target argument to the OP if it has more than 10 words and at least one argumentative content sentence. We then identify threads in the domains of politics and policy, and remove posts with offensive languages. Most recent threads are used as test set. As a result, we have 11, 356 threads or OPs (217, 057 arguments) for training, 1, 774 (33, 318 arguments) for validation, and 1, 703 (36, 777 arguments) for test. They are split into sentences and then tokenized by the Stanford CoreNLP toolkit (Manning et al., 2014). Training Data Construction for Passages and Keyphrase Memory. Since no gold-standard annotation is available for the input passages and 4We further crawled 42, 649 threads from July 2017 to December 2018, compared to the previously collected dataset. keyphrases, we acquire training labels by constructing queries from the gold-standard arguments as described in § 4.1, and reranking retrieved passages based on the following criteria in order: (1) coverage of topic signature words in the input statement; (2) a weighted summation of the coverage of n-grams in the argument5; (3) the magnitude of stance score, where we keep the passages of the same polarity as the argument; (4) content word overlap with the argument; and (5) coverage of topic signature words in the argument. 6.2 System and Oracle Retrieved Passages For evaluation, we employ both system retrieved passages (i.e., constructing queries from OP) and KM (§ 4), and oracle retrieved passages (i.e., constructing queries from target argument) and KM as described in training data construction. Statistics on the final dataset are listed in Table 2. Training System Oracle Avg. # words per OP 383.7 373.0 373.0 Avg. # words per argument 66.0 65.1 65.1 Avg. # passage 4.3 9.6 4.2 Avg. # keyphrase 57.1 128.6 56.6 Table 2: Statistics on the datasets for experiments. 
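As a concrete reading of the mixed objective in Eqs. 9-12 (§5.4), the sketch below computes the three terms for a single training example. The tensor shapes, names, and mean-reduced losses are assumptions for illustration rather than the paper's exact implementation; Eq. 12 corresponds to a binary cross-entropy over the keyphrase-selection scores.

    import torch
    import torch.nn.functional as F

    def mixed_loss(token_logits, token_targets,
                   func_logits, func_targets,
                   sel_scores, sel_targets,
                   gamma=1.0, eta=1.0, pad_id=0):
        """Sketch of Eqs. 9-12 (shapes are assumptions):
        token_logits: (T, V) realization-decoder logits, token_targets: (T,)
        func_logits:  (S, 2) per-sentence function-label logits, func_targets: (S,)
        sel_scores:   (S, M) sigmoid attention scores alpha_jm, sel_targets: (S, M) in {0, 1}."""
        # L_arg: negative log-likelihood of the reference argument tokens (Eq. 10)
        l_arg = F.cross_entropy(token_logits, token_targets, ignore_index=pad_id)
        # L_func: cross-entropy over content vs. filler sentence labels (Eq. 11)
        l_func = F.cross_entropy(func_logits, func_targets)
        # L_sel: binary cross-entropy over keyphrase selection (Eq. 12)
        l_sel = F.binary_cross_entropy(sel_scores.clamp(1e-6, 1 - 1e-6), sel_targets.float())
        return l_arg + gamma * l_func + eta * l_sel

With gamma and eta set to 1.0 as in the experiments, the three terms are simply summed.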
6.3 Comparisons In addition to a Retrieval model, where the top ranked passage is used as counter-argument, we further consider four systems for comparison. (1) A standard Seq2seq model with attention, where we feed the OP as input and train the model to generate counter-arguments. Regular beam search with the same beam size as our model is used for decoding. (2) A Seq2seqAug model with additional input of the keyphrase memory and ranked passages, both concatenated with OP to serve as the encoder input. The reranking-based decoder in our model is also implemented for SEQ2SEQAUG to enhance the coverage of input keyphrases. (3) An ablated SEQ2SEQAUG model where the passages are removed from the input. (4) We also reimplement the argument generation model in our prior work (Hua and Wang, 2018) (H&W) with PyTorch (Paszke et al., 2017), which is used for CANDELA implementation. H&W takes as input the OP and ranked passages, and then uses two 5We choose 0.5, 0.3, 0.2 as weights for 4-grams, trigrams, and bigrams, respectively. 2667 separate decoders to first generate all keyphrases and then the counter-argument. For our model, we also implement a variant where the input only contains the OP and the keyphrase memory. 6.4 Training Details For all models, we use a two-layer LSTM for all encoders and decoders with a dropout probability of 0.2 between layers (Gal and Ghahramani, 2016). All layers have 512-dimensional hidden states. We limit the input statement to 500 tokens, the ranked passages to 400 tokens, and the target counter-argument to 120 tokens. Our vocabulary has 50K words for both input and output, with 300-dimensional word embeddings initialized with GloVe (Pennington et al., 2014) and fine-tuned during model training. We use AdaGrad (Duchi et al., 2011) with a learning rate of 0.15 and an initial accumulator of 0.1 as the optimizer, with the gradient norm clipped to 2.0. Early stopping is implemented according to the perplexity on validation set. For all our models the training takes approximately 30 hours (40 epochs) on a Quadro P5000 GPU card, with a batch size of 64. For beam search, we use a beam size of 5, tuned from {5, 10, 15} on validation. We also pre-train a biLSTM for encoder based on all OPs from the training set, and an LSTM for content realization decoder based on two sources of data: 353K counter-arguments that are high quality root reply paragraphs extended with posts of non-negative karma, and 2.4 million retrieved passages randomly sampled from the training set. Both are trained as done in Bengio et al. (2003). We then use the first layer’s parameters to initialize all models, including our comparisons. 7 Results and Analysis 7.1 Automatic Evaluation We employ ROUGE (Lin, 2004), a recall-oriented metric, BLEU (Papineni et al., 2002), based on n-gram precision, and METEOR (Denkowski and Lavie, 2014), measuring unigram precision and recall by considering synonyms, paraphrases, and stemming. BLEU-2, BLEU-4, ROUGE-2 recall, and METEOR are reported in Table 3 for both setups. Under system setup, our model CANDELA statistically significantly outperforms all comparisons and the retrieval model in all metrics, based on a randomization test (Noreen, 1989) (p < unigram bigram trigram 0 25 50 75 100 125 150 #distinct n-grams per argument Human Retrieval Seq2seq Seq2seqAug HW (2018) CANDELA Figure 3: Average number of distinct n-grams per argument. 
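The significance claims in this section rest on an approximation randomization test (Noreen, 1989). A minimal paired version of such a test, written independently of the paper's evaluation scripts and with placeholder settings for the number of shuffles, might look as follows.

    import random

    def randomization_test(scores_a, scores_b, trials=10000, seed=0):
        """Paired approximation randomization test (Noreen, 1989).
        scores_a, scores_b: per-example metric values for two systems on the same inputs.
        Returns an approximate two-sided p-value for the observed mean difference."""
        rng = random.Random(seed)
        observed = abs(sum(scores_a) - sum(scores_b)) / len(scores_a)
        count = 0
        for _ in range(trials):
            diff = 0.0
            for a, b in zip(scores_a, scores_b):
                if rng.random() < 0.5:   # randomly swap the two systems' scores
                    a, b = b, a
                diff += a - b
            if abs(diff) / len(scores_a) >= observed:
                count += 1
        return (count + 1) / (trials + 1)

Fed with per-example metric values for two systems, a reported p < 0.0005 corresponds to the observed difference being matched or exceeded in essentially none of the random label swaps.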
K 100 500 1000 2000 HUMAN 44.1 25.8 18.5 12.0 RETRIEVAL 50.6 33.3 26.0 18.6 SEQ2SEQ 25.0 7.5 3.2 1.2 SEQ2SEQAUG 28.2 9.2 4.6 1.8 H&W (2018) 38.6 24.0 19.5 16.2 CANDELA 30.0 10.5 5.3 2.3 Figure 4: Percentage of words in arguments that are not in the top-K (K = 100, 500, 1000, 2000) frequent words seen in training. Darker color indicates higher portion of uncommon words found in the arguments. 0.0005). Furthermore, our model generates longer sentences whose lengths are comparable with human arguments, both with about 22 words per sentence. This also results in longer arguments. Under oracle setup, all models are notably improved due to the higher quality of reranked passages, and our model achieves statistically significantly better BLEU scores. Interestingly, we observe a decrease of ROUGE and METEOR, but a marginal increase of BLEU-2 by removing passages from our model input. This could be because the passages introduce divergent content, albeit probably on-topic, that cannot be captured by BLEU. Content Diversity. We further measure whether our model is able to generate diverse content. First, borrowing the diversity measurement from dialogue generation research (Li et al., 2016), we report the average number of distinct n-grams per argument under system setup in Figure 3. Our system generates more unique unigrams and bigrams than other automatic systems, underscoring its capability of generating diverse content. Our model also maintains a comparable typetoken ratio (TTR) compared to systems that generate shorter arguments, e.g., a 0.79 for bigram TTR of our model versus 0.83 and 0.84 for SEQ2SEQAUG and SEQ2SEQ. RETRIEVAL, con2668 w/ System Retrieval w/ Oracle Retrieval B-2 B-4 R-2 MTR #Word #Sent B-2 B-4 R-2 MTR #Word #Sent HUMAN 66 22 66 22 RETRIEVAL 7.55 1.11 8.64 14.38 123 23 10.97 3.05 23.49 20.08 140 21 Comparisons SEQ2SEQ 6.92 2.13 13.02 15.08 68 15 6.92 2.13 13.02 15.08 68 15 SEQ2SEQAUG 8.26 2.24 13.79 15.75 78 14 10.98 4.41 22.97 19.62 71 14 w/o psg 7.94 2.28 10.13 15.71 75 12 9.89 3.34 14.20 18.40 66 12 H&W (2018) 3.64 0.92 8.83 11.78 51 12 8.51 2.86 18.89 17.18 58 12 Our Models CANDELA 12.02∗2.99∗14.93∗16.92∗119 22 15.80∗5.00∗23.75 20.18 116 22 w/o psg 12.33∗2.86∗14.53∗16.60∗123 23 16.33∗4.98∗23.65 19.94 123 23 Table 3: Main results on argument generation. We report BLEU-2 (B-2), BLEU-4 (B-4), ROUGE-2 (R-2) recall, METEOR (MTR), and average number of words per argument and per sentence. Best scores are in bold. ∗: statistically significantly better than all comparisons (randomization approximation test (Noreen, 1989), p < 0.0005). Input is the same for SEQ2SEQ for both system and oracle setups. taining top ranked passages of human-edited content, produces the most distinct words. Next, we compare how each system generates content beyond the common words. As shown in Figure 4, human-edited text, including goldstandard arguments (HUMAN) and retrieved passages, tends to have higher usage of uncommon words than automatic systems, suggesting the gap between human vs. system arguments. Among the four automatic systems, our prior model (Hua and Wang, 2018) generates a significantly higher portion of uncommon words, yet further inspection shows that the output often includes more offtopic information. 
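The diversity numbers discussed above are straightforward to reproduce in spirit. The small utility below sketches how distinct n-gram counts and an n-gram type-token ratio could be computed; the tokenization and the exact counting conventions (per-argument vs. corpus-level) are assumptions, not a description of the paper's scripts.

    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def diversity_stats(arguments, n=2):
        """arguments: list of token lists (one generated argument each).
        Returns the average number of distinct n-grams per argument and a
        corpus-level n-gram type-token ratio."""
        distinct_per_arg = []
        corpus_counts = Counter()
        for tokens in arguments:
            grams = ngrams(tokens, n)
            distinct_per_arg.append(len(set(grams)))
            corpus_counts.update(grams)
        avg_distinct = sum(distinct_per_arg) / max(len(arguments), 1)
        ttr = len(corpus_counts) / max(sum(corpus_counts.values()), 1)
        return avg_distinct, ttr

    # toy usage with two whitespace-tokenized arguments
    print(diversity_stats([["the", "wage", "gap"], ["the", "wage", "gap", "is", "real"]]))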
7.2 Human Evaluation Human judges are asked to rate arguments on a Likert scale of 1 (worst) to 5 (best) on the following three aspects: grammaticality—denotes language fluency; appropriateness—indicates if the output is on-topic and on the opposing stance; content richness—measures the amount of distinct talking points. In order to promote consistency of annotation, we provide descriptions and sample arguments for each scale. For example, an appropriateness score of 3 means the counterargument contains relevant words and is likely to be on a different stance. The judges are then asked to rank all arguments for the same input based on their overall quality. We randomly sampled 43 threads from the test set, and hired three native or proficient English speakers to evaluate arguments generated by SEQ2SEQAUG, our prior argument generation Gram. Appr. Cont. Top-1 Top-2 HUMAN 4.95 4.23 4.39 75.8% 85.8% RETRIEVAL 4.85 3.04 3.68 17.5% 55.8% SEQ2SEQAUG 4.83 2.67 2.47 1.7% 22.5% H&W (2018) 3.86 2.27 2.10 1.7% 7.5% CANDELA 4.59 2.97 2.93∗ 3.3% 28.3% Table 4: Human evaluation on grammaticality (Gram), appropriateness (Appr), and content richness (Cont.), on a scale of 1 to 5 (best). The best result among automatic systems is highlighted in bold, with statistical significance marked with ∗(approximation randomization test, p < 0.0005). The highest standard deviation among all is 1.0. Top-1/2: % of evaluations a system being ranked in top 1 or 2 for overall quality. model (H&W), and the new model CANDELA, along with gold-standard HUMAN arguments and the top passage by RETRIEVAL. Results. The first 3 examples are used only for calibration, and the remaining 40 are used to report results in Table 4. Inter-annotator agreement scores (Krippendorff’s α) of 0.44, 0.58, 0.49 are achieved for the three aspects, implying general consensus to intermediate agreement. Our system obtains the highest appropriateness and content richness among all automatic systems. This confirms the previous observation that our model produces more informative argument than other neural models. SEQ2SEQAUG has a marginally better grammaticality score, likely due to the fact that our arguments are longer, and tend to contain less fluent generation towards the end. Furthermore, we see that human arguments are 2669 ranked as the best in about 76% of the evaluation, followed by RETRIEVAL. Our model is more likely to be ranked top than any other automatic models. Especially, our model is rated better than either HUMAN or RETRIEVAL, i.e., human-edited text, in 39.2% of the evaluations, compared to 34.2% for SEQ2SEQAUG and 13.3% for our prior model. 7.3 Sample Arguments and Discussions We show sample outputs of different systems alongside human constructed counter-argument in Figure 5. As can be seen, our system arguments cover many relevant talking points, including the idea of “taking care of children” that is also used by human. It further illustrates the effectiveness of our retrieval system and the usage of keyphrase selection as part of text planning to guide argument generation. Moreover, we also observe that our model generation is able to switch between argumentative content sentence and filler sentence, though better control is needed to improve coherence. Meanwhile, SEQ2SEQ frequently echos words from OP, and both SEQ2SEQ and SEQ2SEQAUG suffer from the problems of “hallucination” (e.g., the first sentence in SEQ2SEQAUG) and repetition (e.g., the second and third sentences in SEQ2SEQ). 
Nonetheless, there is a huge space for improvement. First, our model tends to overuse negation, such as “this is not to say that it is unfair...”. It is likely due to its overfitting on specific stylistic languages, e.g., negation is often observed for refutation in debates (Wang et al., 2017). Second, human arguments have significantly better organization and often deploy complicated argumentation strategies (Wachsmuth et al., 2018a), which so far is not well captured by any automatic system. Both points inspire future work on (1) controlling of the language styles and corresponding content, and (2) mining argumentation structures for use in guiding generation with better planning. 8 Conclusion We present a novel counter-argument generation framework, CANDELA. Given an input statement, it first retrieves arguments of different perspectives from millions of high-quality articles collected from diverse sources. An argument generation component then employs a text planning decoder to conduct content selection and specify a OP: The wage gap isn’t about gender. (...) So one factor definitely the fact that there are some government jobs that are seriously dominated by women and happen to pay really low (...) Human: Children are the first factor. I work in a traditionally male-dominated field, and questions are asked to determine familial status, even though those are illegal. Employers know single moms have to stay home if their child is sick. They know single moms can only work during daycare hours. They know single moms are unavailable for weekend and late night work. They know single moms cannot travel. The workplace is very family-unfriendly. Seq2seq: the problem with this is that there are a lot of people who do n’t want to hire women . there are plenty of women who do n’t have the money to pay for it . the problem is that women do n’t need to pay for their work . they do n’t have to worry about it . it ’s a matter of money . Seq2seqAug: men and women are not 39.5 % of the pay gap . the problem is not that women are more likely to be victims of sexism , but rather that they are more natural good-looking/attractive action . this is not the case . CANDELA: the problem with this argument is that the wage gap does not have to do with the gender pay gap . it is a fact that women are more likely to be able to take care of their children than their male counterparts . this is not to say that it is unfair to assume that women are being paid less than men , but that does not mean that it is not the case that women are discriminated against . it is not a matter of the wage gap , it is a matter of opinion . it is the job of the employer to make sure that the job is not the same as the other Keyphrase Memory: wage gap; discrimination; gender pay gaps; raise the child; male colleagues; paid maternity leave; underlying gender discrimination . . . Figure 5: Sample arguments generated by different systems along with a sample human argument. For our model CANDELA, additionally shown are the keyphrase memory with selected phrases in color, and argumentative filler sentence in italics. suitable language style at sentence-level, followed by a content realization decoder to produce the final argument. Automatic evaluation and human evaluation indicate that our model generates more proper arguments with richer content than nontrivial comparisons, with comparable fluency to human-edited content. 
Acknowledgements This research is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341. We thank Varun Raval for helping with data processing and search engine indexing. We are grateful to the three anonymous reviewers for their constructive suggestions. 2670 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Roy Bar-Haim, Indrajit Bhattacharya, Francesco Dinuzzo, Amrita Saha, and Noam Slonim. 2017. Stance classification of context-dependent claims. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 251–261. Association for Computational Linguistics. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Giuseppe Carenini and Johanna Moore. 2000. A strategy for generating evaluative arguments. In INLG’2000 Proceedings of the First International Conference on Natural Language Generation, pages 47–54, Mitzpe Ramon, Israel. Association for Computational Linguistics. T Edward Damer. 2012. Attacking faulty reasoning. Cengage Learning. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376–380, Baltimore, Maryland, USA. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1019–1027. Curran Associates, Inc. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The definitive guide: A distributed realtime search and analytics engine. " O’Reilly Media, Inc.". Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11–21. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evidence. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 219–230. Association for Computational Linguistics. Benoit Lavoie and Owen Rambow. 1997. A fast and portable realizer for text generation systems. In Fifth Conference on Applied Natural Language Processing. Dieu-Thu Le, Cam Tu Nguyen, and Kim Anh Nguyen. 2018. 
Dave the debater: a retrieval-based and generative argumentative dialogue agent. In Proceedings of the 5th Workshop on Argument Mining, pages 121–130. Association for Computational Linguistics. Ran Levy, Ben Bogin, Shai Gretz, Ranit Aharonov, and Noam Slonim. 2018. Towards an argumentative content search engine using weak supervision. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2066–2081. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics. George A. Miller. 1994. Wordnet: A lexical database for english. In HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. 2671 Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. 2017. Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration. PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Jan Pomikálek. 2011. Removing boilerplate and duplicate content from web corpora. Ph.D. thesis, Masaryk university, Faculty of informatics, Brno, Czech Republic. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer. Chris Reed, Derek Long, and Maria Fox. 1996. An architecture for argumentative dialogue planning. In International Conference on Formal and Applied Practical Reasoning, pages 555–566. Springer. Paul Reisert, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2015. A computational approach for generating toulmin model argumentation. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 45–55, Denver, CO. Association for Computational Linguistics. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press. 
Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 440–450, Lisbon, Portugal. Association for Computational Linguistics. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. Nist Special Publication Sp, 109:109. Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshihiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument generation system in debating. In Proceedings of ACLIJCNLP 2015 System Demonstrations, pages 109– 114, Beijing, China. Association for Computational Linguistics and The Asian Federation of Natural Language Processing. Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, and Iryna Gurevych. 2018. Argumentext: Searching for arguments in heterogeneous sources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 21–25. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Stephen Edelston Toulmin. 1958. The use of argument. Cambridge University Press. Henning Wachsmuth, Nona Naderi, Ivan Habernal, Yufang Hou, Graeme Hirst, Iryna Gurevych, and Benno Stein. 2017a. Argumentation quality assessment: Theory vs. practice. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 250–255. Association for Computational Linguistics. Henning Wachsmuth, Martin Potthast, Khalid Al Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017b. Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining, pages 49–59. Association for Computational Linguistics. Henning Wachsmuth, Manfred Stede, Roxanne El Baff, Khalid Al Khatib, Maria Skeppstedt, and Benno Stein. 2018a. Argumentation synthesis following rhetorical strategies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3753–3765. Association for Computational Linguistics. Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018b. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 241–251. Association for Computational Linguistics. Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation schemes. Cambridge University Press. 2672 Lu Wang, Nick Beauchamp, Sarah Shugars, and Kechen Qin. 2017. Winning on the merits: The joint effects of content and style on debate outcomes. Transactions of the Association for Computational Linguistics, 5:219–232. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174–3187, Brussels, Belgium. Association for Computational Linguistics. 
Toshihiko Yanase, Toshinori Miyoshi, Kohsuke Yanai, Misa Sato, Makoto Iwayama, Yoshiki Niwa, Paul Reisert, and Kentaro Inui. 2015. Learning sentence ordering for opinion generation of debate. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 94–103, Denver, CO. Association for Computational Linguistics. A Appendices A.1 Chunking Grammar for Keyhrase Extraction In order to construct keyphrase candidates, we compile a set of regular expressions based on the following grammar rules, and extract all matched NP and VP patterns as candidates. NP: {<DT|PP$>?<JJ|JJR>*<NN.*|CD|JJ>+} PP: {<IN><NP>} VP: {<MD>?<VB.*><NP|PP>} A.2 Stance Scoring Model Our stance scoring model calculates the score by aggregating the sentiment words surrounding the opinion targets. Here we choose the keyphrases of input statement as opinion targets, denoted as T. We then tally sentiment words, collected from Hu and Liu (2004), towards targets in T, with positive words counted as +1 and negative words as −1. Each score is discounted by d−5 τ,l , with dτ,l being the distance between the sentiment word l and the target τ ∈T. The stance score of a text psg (an input statement or a retrieved passage) towards opinion targets T is calculated as: Q(psg, T) = X τ∈T X l∈psg sgn(l) · d−5 τ,l (13) In our experiments, we only keep passages with a stance score of the opposite sign to that of the input statement, and with a magnitude greater than 5, i.e. |Q(psg, T)| > 5 (determined by manual inspection on training set). A.3 List of Discourse Markers As described in §5.2 in the main paper, we use a list of discourse markers together with topic signature words to label argumentative content sentences. The following list of discourse markers are manually selected from the Appendix B in Prasad et al. (2008). • Contrast: although, though, even though, by comparison, by contrast, in contrast, however, nevertheless, nonetheless, on the contrary, regardless, whereas • Restatement/Equivalence/Generalization: eventually, in short, in sum, on the whole, overall • Result: accordingly, as a result, as it turns out, consequently, finally, furthermore, hence, in fact, in other words, in short, in the end, in turn, therefore, thus, ultimately
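To make the stance score of Eq. 13 in A.2 concrete, a simplified implementation is sketched below. The sign convention, the d^-5 distance discount, and the |Q| > 5 filtering threshold follow the description above, but the tokenization is left abstract and opinion targets are treated as single tokens for brevity; the function and variable names are illustrative only.

    def stance_score(passage_tokens, targets, pos_words, neg_words):
        """Simplified version of Eq. 13: Q(psg, T) = sum over targets tau and
        sentiment words l of sgn(l) * d(tau, l)^(-5), with d the token distance."""
        target_positions = [i for i, tok in enumerate(passage_tokens) if tok in targets]
        score = 0.0
        for i, tok in enumerate(passage_tokens):
            if tok in pos_words:
                sign = 1.0
            elif tok in neg_words:
                sign = -1.0
            else:
                continue
            for j in target_positions:
                d = abs(i - j)
                if d > 0:
                    score += sign * d ** (-5)
        return score

    # A passage is kept only if its stance opposes the input and |Q| exceeds the threshold (A.2).
    def keep_passage(q_passage, q_input, threshold=5.0):
        return abs(q_passage) > threshold and q_passage * q_input < 0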
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673–2679 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2673 A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation Feng Nie1∗Jin-Ge Yao2 Jinpeng Wang2 Rong Pan1 Chin-Yew Lin2 1Sun Yat-Sen University 2Microsoft Research Asia [email protected], [email protected] 2{jinge.yao, jinpwa, cyl}@microsoft.com Abstract Recent neural language generation systems often hallucinate contents (i.e., producing irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of the input structure and text. To mitigate this issue, we propose to integrate a language understanding module for data refinement with selftraining iterations to effectively induce strong equivalence between the input data and the paired text. Experiments on the E2E challenge dataset show that our proposed framework can reduce more than 50% relative unaligned noise from the original data-text pairs. A vanilla sequence-to-sequence neural NLG model trained on the refined data has improved on content correctness compared with the current state-of-the-art ensemble generator. 1 Introduction Neural models for natural language generation (NLG) based on the encoder-decoder framework have become quite popular recently (Wen et al., 2015; Mei et al., 2016; Wiseman et al., 2017; Wen et al., 2017; Chisholm et al., 2017; Nie et al., 2018, inter alia). Albeit being appealing for producing fluent and diverse sentences, neural NLG models often suffer from a severe issue of content hallucination (Reiter, 2018a), which refers to the problem that the generated texts often contain information that is irrelevant to or contradicted with the input. Given that similar issues have been less reported or noticed in the latest neural machine translation systems, we believe that the origin of the issue for neural NLG comes from the data side. Current datasets used for training neural NLG systems often include instances that do not contain the same amount of information from the input structure and the output text (Perez-Beltrachini and Gardent, 2017). There is no exception for datasets ∗Contribution during internship at Microsoft. MR Name Rating Price Golden Palace 5 out of 5 Cheap Reference: Golden Palace is a restaurant specializing in breakfast in the low price range. Table 1: A loosely corresponded MR-text pair. Bolded phrases conforms to the MR, underlined words are domain-specific additional information, and italic values in the MR are not realised in the reference. originally intended for surface realisation (“how to say”) without focusing on content selection (“what to say”). Table 1 depicts an example, where the attribute Rating=5 out of 5 in the input meaning representation (MR) is not verbalised in a reference text written by human, while the word restaurant in the reference should refer to an attribute value EatType=Restaurant not contained in the MR. Without explicit alignments in between MRs and the corresponding utterances for guidance, neural systems trained on such data often produce unexpected errors. Previous work attempted at injecting indirect semantic control over the encoder-decoder architecture (Wen et al., 2015; Duˇsek and Jurcicek, 2016; Agarwal et al., 2018) or encouraging consistency during training (Chisholm et al., 2017), without essentially changing to the noisy training data. 
One exception is the Slug2Slug system (Juraska et al., 2018), where the authors use an aligner with manually written heuristic rules to filter out unrealized attributes from data. In this paper, we propose a simple, automatic recipe towards reducing hallucination for neural surface realisers by enhancing the semantic equivalence between pairs of MRs and utterances. The steps include: (1) Build a language understanding module (ideally well-calibrated) that tries to parse the MR from an utterance; (2) Use it to reconstruct the correct attribute values revealed in the reference texts; (3) With proper confidence thresh2674 olding, conduct self-training to iteratively recover data pairs with identical or equivalent semantics. Experiments on the E2E challenge benchmark (Novikova et al., 2017b) show that our framework can reduce more than 50% relative unaligned noise from original MR-text pairs, and a vanilla sequence-to-sequence model trained on the refined data can improve content correctness in both human and automatic evaluations, when compared with the current state-of-the-art neural ensemble system (Juraska et al., 2018). 2 Approach Our proposed framework consists of a neural natural language understanding (NLU) module with iterative data refinement to induce semantically equivalent MR-text pairs from a dataset containing a moderate level of noise. 2.1 Notation Formally, given a corpus with paired meaning representations and text descriptions {(R, X)}N i=1, the input MR R = (r1, . . . , rM) is a set of slotvalue pairs rj = (sj, vj), where each rj contains a slot sj (e.g., rating) and a value vj (e.g., 5 out of 5). The corpus has M pre-defined slots , and each slot sj has Kj unique categorical values vj ∈(cj,1, . . . , cj,Kj). The corresponding utterance X = (x1, . . . , xT ) is a sequence of words describing the MR. 2.2 Neural NLU Model As shown in Figure 1, the NLU model consists of a self-attentive encoder and an attentive scorer. Self-Attentive Encoder. The encoder produces the vector representations of slot-value pairs in MR and its paired utterance. A slot-value pair r can be treated as a short sequence W = (w1, . . . , wn) by concatenating words in its slot and value. The word sequence W is first represented as a sequence of word embedding vectors (v1, . . . , vn) from a pre-trained embedding matrix E, and then passed through a bidirectional LSTM layer to yield the contextualized representations U sv = (usv 1 , . . . , usv n ). To produce a summary context vector for U sv, we adopt the same selfattention structure in Zhong et al. (2018) to obtain the sentence vector cs, due to the effectiveness of self-attention modules over variable-length sequences. Similarly, we obtain the contextualized D Name = The Golden Palace Self-attention Attention Scoring 𝑃(𝑟|𝑋) Slot-value pair 𝑟 Output text 𝒖1 sv 𝒖2 𝑠𝑣 𝒖3 𝑠𝑣 𝒖4 𝑠𝑣 The Golden Palace is … 𝒖1 𝑜 𝒖2 𝑜 𝒖3 𝑜 𝒖4 𝑜 𝒄𝑠 𝒅 𝑤1 𝑤2 𝑤3 𝑤4 𝑥1 𝑥2 𝑥3 𝑥4 Utterance 𝑋 Figure 1: The structure of the neural NLU model. representations Uo = (uo 1, . . . , uo T ) for the utterance X. Attentive Scorer. The scorer calculates the semantic similarity between a slot-value pair r (e.g., Price=Cheap) and the utterance X (e.g., reference in Table 1). Firstly, an attention layer is applied to select the most salient words in X related to r, which yields the attentive representation d of utterance X. 
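Before stating the scoring function formally, the overall shape of the scorer can be sketched as follows. This is an illustrative re-implementation rather than the authors' code: the hidden sizes are assumptions, and a single-layer attentive pooling stands in for the self-attention structure of Zhong et al. (2018).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveScorer(nn.Module):
        """Illustrative sketch of the NLU scorer: a BiLSTM encoder shared by the
        slot-value pair and the utterance, attentive pooling for the pair, and an
        attention over utterance states conditioned on the pair vector."""

        def __init__(self, embed_dim=300, hidden_dim=200):
            super().__init__()
            self.encoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
            self.self_attn = nn.Linear(2 * hidden_dim, 1)

        def pool(self, states):  # attentive pooling -> sentence vector c_s
            weights = F.softmax(self.self_attn(states), dim=1)
            return (weights * states).sum(dim=1)

        def forward(self, pair_embeds, utt_embeds):
            """pair_embeds: (1, n, embed_dim); utt_embeds: (1, T, embed_dim)."""
            pair_states, _ = self.encoder(pair_embeds)
            utt_states, _ = self.encoder(utt_embeds)
            c_s = self.pool(pair_states)                                       # (1, 2h)
            attn = F.softmax(torch.bmm(utt_states, c_s.unsqueeze(2)), dim=1)   # b_t
            d = (attn * utt_states).sum(dim=1)                                 # attentive summary of X
            # unnormalized score; the softmax over candidate slot values is applied outside
            return -((d - c_s) ** 2).sum(dim=1)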
Given the sentence vector cs of the slot-value pair r and the attentive vector d of the utterance X, the normalized semantic similarity is defined as: p(r|X) = softmax(−||d −cs||2), where d = T X t=1 btuo t, with bt = softmax((uo t)T cs). (1) Model Inference. Each utterance X will be parsed to an MR Re = (re 1, . . . , re M), with each slot-value pair re j = (sj, vj) determined by selecting the candidate value vj with the maximum semantic similarity for each slot sj: vj = cj,k, k = arg max k p(re j = (sj, cj,k)|X), (2) where cj,k denotes the kth categorical value for jth slot. Since an utterance may not describe any information about a specific slot s, we add a NONE value as a candidate value of each slot. Model Training. The NLU model is optimized by minimizing the cross-entropy loss: L(θ) = − N X i M X j log p(ri,j|Xi; θ) (3) where θ denotes model parameters, and ri,j denotes the jth slot-value pair in the ith training MR. 2675 2.3 Iterative Data Refinement The performance of NLU can be inaccurate when trained on noisy data-text pairs. However, models trained on data with a moderate level of noise could still be well-calibrated. This could enable an iterative relabeling procedure, where we only take MRs produced by NLU with high confidence together with their utterances as new training MRtext pairs to bootstrap the NLU training. Algorithm 1 describes the training procedure. We first pre-train the NLU model using the original data-text pairs for Npre iterations. Then the NLU model parses relevant MR for every utterance in training data, which can be used as new training examples (Line 4). However, due to the inaccuracy of the NLU results, we only use a small portion (φ is set to 40% on validation) with high confidence. Moreover, as each MR consists of up to M slots with some of them being unreliable, we filter the slot-value pairs with slot probability below average according to slot confidence (Line 8 14). Finally, the NLU model is fine-tuned with the new training corpus De. This process is repeated for Ntune epochs. The final NLU model is leveraged to parse all utterances in the training corpus. The resulting MRs paired with original utterances form the refined training corpus for NLG. 3 Experiments 3.1 Setup Dataset. Our experiments are conducted on E2E challenge (Novikova et al., 2017b) dataset, which aims at verbalizing all information from the MR. It has 42,061, 4,672 and 4,693 MR-text pairs for training, validation and testing, respectively. Note that every input MR in this dataset has 8.65 different references on average. The test set has 630 unique input MRs. We examine the effectiveness of our proposed method in two aspects: 1) reducing the noise in data-text pairs (NLU), 2) reducing hallucinated contents in surface realisation (NLG). Automatic metrics. The well-crafted rule-based aligner built by Juraska et al. (2018)1 is adopted to approximately reflect the semantic correctness of NLU and NLG models. The error rate is calculated by matching the slot values in output utterance: Err = M N , where N is the total number 1 We use the public available evaluation script in https://github.com/jjuraska/slug2slug/blob/master/slot aligner /data analysis.py Algorithm 1 Iterative Data Refinement Require MR-text pairs D = {(R, X)}N 1 , confidence threshold φ, pre-training epochs Npre, tuning epochs Ntune, 1: Train θ with Eq. 3 on D for Npre iterations 2: for iter = 1 to Ntune do 3: Reset self-training corpus De = {} 4: Parse the MR Re i = (re i,1, . . . , re i,M) for every Xi using Eq. 
2 5: Slot confid. pj = PN i=1 p(re i,j|Xi) for sj 6: MR confid. fi = PM j=1 p(re i,j|Xi) for Re i 7: Sort {(Re, X)}N 1 by MR confidence in reverse order 8: for i = 1 to ⌊φ · N⌋do 9: for j = 1 to M do 10: if p(re i,j|Xi) < pj/N then 11: Remove re i,j from Re i 12: end if 13: end for 14: De ←De ∪(Re i, Xi) 15: end for 16: Update θ with Eq. 3 on De 17: end for of MR-text pairs, and M is the number of wrong MR-text pairs which contain missing or conflict slots in the realization given its input MR. BLEU4 (Papineni et al., 2002) is also reported, although currently neither BLEU nor any other automatic metrics could be convincingly used for evaluating language generation (Novikova et al., 2017a; Chaganty et al., 2018; Reiter, 2018b, inter alia). Human Evaluation. We randomly sample 100 data-text pairs from test set and ask three crowd workers to manually annotate missed (M), added (A), and contradicted (C) slot values in NLG outputs with respect to the input MR, or exact match (E) if all slot values have been realized in the given utterance which contains no additional hallucinated information. When evaluating the NLU systems, missed and added slots refer to the opposite directions, respectively. Compared Systems. Systems in comparison: • TGen (Duˇsek et al., 2018): a sequence-tosequence (Seq2Seq) model with reranking. • Slug2Slug (Juraska et al., 2018): current state-of-the-art method on E2E challenge dataset. It is an ensemble model and uses a rule based aligner for data cleaning and reranking. 2676 • Seq2Seq: a basic Seq2Seq model trained on original MR-text pairs with the copy mechanism (Gu et al., 2016; See et al., 2017). • Seq2Seq+aug: Seq2Seq trained on the MRtext pairs reconstructed by pre-trained NLU. • Seq2Seq+aug+iter: Seq2Seq trained on the MR-text pairs reconstructed by NLU model with iterative data refinement algorithm. • Seq2Seq+aligner: Seq2Seq trained on the MR-text pairs produced by the rule based aligner (Juraska et al., 2018). Implementation Details. For all models, we use fixed pre-trained GloVe vectors (Pennington et al., 2014) and character embeddings (Hashimoto et al., 2017). The dimensions of trainable hidden units in LSTMs are all set to 400. The epochs for pre-training Npre and bootstrapping Ntune are all set to 5 on validation. During training, we regularize all layers with a dropout rate of 0.1. We use stochastic gradient descent (SGD) for optimisation with learning rate 0.1. The gradient is truncated by 5. For hyper-parameter φ, we conduct experiments with different values (φ = 0.2, 0.4, 0.6, 0.8, 1.0), details in Appendix A. 3.2 Main Results NLU Results. One challenge in E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process (Duˇsek et al., 2018). We examine the performance of the NLU module by comparing noise reduction of the reconstructed MR-text pairs with the original ones in both training and test sets. Table 2 shows the automatic results. Applying our NLU model with iterative data refinement, the error rates of refined MR-text pairs yields 23.33% absolute error reduction on test set. Human evaluation in Table 3 shows that our proposed method achieves 16.69% improvement on information equivalence between MR-text pairs. These results confirm the effectiveness of our method in reducing the unaligned data noise, and the large improvement (i.e, 15.09%) on exact match when applying self-training algorithm suggests the importance of iterative data refinement. 
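These gains hinge on the confidence-based selection in Algorithm 1. Stripped of the training loop, that selection step can be sketched in plain Python as follows; the data structures and names are illustrative and are not taken from the released implementation.

    def refine_corpus(parsed, phi=0.4):
        """parsed: list of (slot_probs, text) where slot_probs maps each slot to
        (best_value, confidence) as produced by the NLU model (Eq. 2).
        Mirrors lines 5-15 of Algorithm 1: keep the top phi fraction of examples
        by MR confidence and drop slot-value pairs below the average slot confidence."""
        n = len(parsed)
        slots = list(parsed[0][0].keys())
        # average confidence per slot over the whole corpus (line 5)
        avg_conf = {s: sum(sp[s][1] for sp, _ in parsed) / n for s in slots}
        # MR confidence = sum of its slot confidences (line 6), sorted descending (line 7)
        ranked = sorted(parsed, key=lambda ex: sum(v[1] for v in ex[0].values()), reverse=True)
        refined = []
        for slot_probs, text in ranked[: int(phi * n)]:          # line 8
            mr = {s: v for s, (v, conf) in slot_probs.items()    # lines 9-13
                  if conf >= avg_conf[s]}
            refined.append((mr, text))                           # line 14
        return refined

The refined pairs then serve as the self-training corpus on which the NLU parameters are updated before the next iteration.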
NLG Results. Table 4 presents the automatic results of different neural NLG systems. We can see that Seq2Seq+aug+iter achieves comparable BLEU score as Slug2Slug but with 4.44% error reduction on content correctness over Train Err(%) Test Err(%) Original data 35.50 37.59 NLU refined data 16.31 14.26 w/o self-training 25.14 22.69 Table 2: Automatic evaluation results of different NLU models on both training and test sets E(%) M(%) A(%) C(%) Original data 71.93 0 24.13 3.95 NLU refined data 88.62 5.45 2.48 3.47 w/o self-training 73.53 13.23 8.33 4.91 Table 3: Human evaluation results for NLU on test set (inter-annotator agreement: Fleiss’ kappa = 0.855) BLEU(%) Err(%) TGen 65.90 18.09 (114/630) Slug2Slug 66.19 6.51 (41/630) Seq2Seq 66.15 69.37 (374/630) Seq2Seq+aug 66.49 28.89 (182/630) Seq2Seq+aug+iter 65.63 2.07 (13/630) Seq2Seq+aligner 63.81 1.75 (11/630) Table 4: Automatic metrics for NLG E(%) M(%) A(%) C(%) TGen 78.49 15.12 2.69 3.3 Slug2Slug 91.36 2.98 0 5.66 Seq2Seq 44.07 50.65 4.03 0.65 Seq2Seq+aug+iter 93.93 3.36 2.69 0 Table 5: Human evaluation results for NLG (interannotator agreement: Fleiss’ kappa = 0.832) Slug2Slug. Seq2Seq+aug+iter largely improves the content correctness over the baseline Seq2Seq with 67.3% error reduction. Besides, we also replace our NLU module with the rule based aligner crafted by Juraska et al. (2018) for data refinement to inspect the difference between our proposed method and manually designed rich heuristics. We can observe that these two models (Seq2Seq+aug+iter and Seq2Seq+aligner) achieve comparable performance, while our approach is fully automatic and requires no domain knowledge. The human evaluation results are shown in Table 5. We can find that Seq2Seq+aug+iter improves 2.59% accuracy on exact match over Slug2Slug. Specifically, Slug2Slug augments original training data by only deleting additional slot values not realized in the utterance with an aligner, which is not capable of the situation where the given utterance contains incorrect or additional slot values and leads more con2677 Utterance: Located in riverside, near Caf Sicilia, is the Phoenix, a French pub that is family-friendly and has average prices and an average rating. Original MR: name[The Phoenix], eatType[pub], food[French], priceRange[20-25], area[riverside], customer rating[3 out of 5], familyFriendly[no], near[Caf Sicilia] Refined MR: name[The Phoenix], eatType[pub], food[French], priceRange[moderate], area[riverside], customer rating[average], familyFriendly[yes], near[Caf Sicilia] Table 6: Example for data refinement; The underscored item is incorrect. MR Name:[The Mill]; EatType:[pub]; Food:[Fast Food];PriceRange:[high]; FaimilyFriendly:[yes];Near:[Caf Sicilia]; Area:[riverside]; Rating:[average] TGen The Mill is a high priced family friendly fast food pub located near Caf Sicilia in the riverside area. Slug2Slug children friendly pub in the riverside area near Caf Sicilia. It has a high price range and a high customer rating Seq2Seq The Mill is a family friendly pub located near Caf Sicilia. Seq2Seq+ aug+iter The Mill is a children friendly fast food pub near Caf Sicilia in the riverside area. It has a high price range and an average customer rating. Table 7: Examples of different system outputs. tradicted errors. Our method can complement and correct original MR with additional slot values described in the paired texts to effectively alleviate generating contradicted facts. 
However, due to the imperfection of NLU model, our method may ignore part of slot values realized in utterances and produce some additional errors. 3.3 Case Study Example for refined data. Table 6 depicts a case for one pair with originally inaccurate MR while being corrected by NLU module and iterative refinement. Our proposed method is capable of reducing the unaligned noise for original data. Example for NLG. Table 7 shows the sentences generated by different NLG systems. Seq2Seq without any semantic control tends to generate shorter descriptions. Slug2Slug and TGen with reranker to control the content coverage can generate more input information, but still misses one input information and Slug2Slug produces a contradicted fact (i.e., customer rating). Our proposed method Seq2Seq+aug+iter trained on refined MR-text pairs, verbalises all the input information correctly, which shows the importance of data quality in terms of strong equivalence between MR and utterance. 4 Discussion In this paper, we present a simple recipe to reduce the hallucination problem in neural language generation: introducing a language understanding module to implement confidence-based iterative data refinement. We find that our proposed method can effectively reduce the noise in the original MR-text pairs from the E2E dataset and improve the content coverage for standard neural surface realisation (no focus on content selection). However, the currently presented approach still has two clear limitations. One is that this simple approach is implicitly built on an assumption of a moderate level of noise in the original data, which makes it possible to bootstrap a well-calibrated NLU module. We are still on the way to find out solutions for cases with huge noise (PerezBeltrachini and Lapata, 2018; Wiseman et al., 2017), where heavy manual intervention or external knowledge should be desperately needed. The other limitation of this preliminary work is that it currently overlooks the challenges of lexical choices for quantities, degrees, temporal expressions, etc, which are rather difficult to learn merely from data and should require additional commonsense knowledge. An example case is in Table 6, where the original priceRange=20-25 is refined to be priceRange=moderate, which enhances the correspondence between the MR and the text but sidesteps the lexical choice for numbers which requires localised numerical commonsense. Additional modules for lexical choices should be expected for a refined system. 5 Acknowledgement We thank Zhirui Zhang, Shuangzhi Wu, and the anonymous reviewers for helpful comments. Feng Nie is partially supported by National Key R&D Program of China (2018YFB1004404) and Key R&D Program of Guangdong Province (2018B010107005). The contact author of this paper, according to the meaning given to this role by Sun Yat-Sen University, is Rong Pan. 2678 References Shubham Agarwal, Marc Dymetman, and Eric Gaussier. 2018. Char2char generation with reranking for the E2E NLG challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 451–456, Tilburg University, The Netherlands. Association for Computational Linguistics. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Melbourne, Australia. Association for Computational Linguistics. 
Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 633–642, Valencia, Spain. Association for Computational Linguistics. Ondˇrej Duˇsek and Filip Jurcicek. 2016. Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics. Ondˇrej Duˇsek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the E2E NLG challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 322–328, Tilburg University, The Netherlands. Association for Computational Linguistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923–1933, Copenhagen, Denmark. Association for Computational Linguistics. Juraj Juraska, Panagiotis Karagiannis, Kevin Bowden, and Marilyn Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 152–162, New Orleans, Louisiana. Association for Computational Linguistics. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using LSTMs with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 720–730, San Diego, California. Association for Computational Linguistics. Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural networks for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3879–3889, Brussels, Belgium. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017a. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Duˇsek, and Verena Rieser. 2017b. The E2E dataset: New challenges for endto-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201–206, Saarbr¨ucken, Germany. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. 
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Laura Perez-Beltrachini and Claire Gardent. 2017. Analysing data-to-text generation benchmarks. In Proceedings of the 10th International Conference on Natural Language Generation, pages 238–242, Santiago de Compostela, Spain. Association for Computational Linguistics. Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1516–1527, New Orleans, Louisiana. Association for Computational Linguistics. Ehud Reiter. 2018a. Hallucination in neural NLG. 2679 Ehud Reiter. 2018b. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393– 401. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721, Lisbon, Portugal. Association for Computational Linguistics. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449, Valencia, Spain. Association for Computational Linguistics. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458– 1467, Melbourne, Australia. Association for Computational Linguistics. 0 20 40 60 80 100 (%) 5 10 15 20 25 30 Content Coverage Error Rate (%) Figure 2: The effect of hyperparameter φ for NLG content coverage performance. A Effect of φ on NLG model The parameter φ controls the proportion of relevant MRs produced by NLU model for iterative training. Figure 2 shows its influence for NLG on the content coverage measurement. The experimental result shows NLG models trained on data produced by self-training achieve error reduction in content coverage. As the NLU model can bring inaccurate instances when performing iterative data augmentation, controlling the proportion φ from 20% to 40% can yield better results compared to introducing all the MRs produced by NLU for self-training.
2019
256
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2680–2686 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2680 Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information Pengcheng Yang1,2∗, Zhihan Zhang2∗, Fuli Luo2, Lei Li2, Chengyang Huang3, Xu Sun1,2 1Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 2MOE Key Lab of Computational Linguistics, School of EECS, Peking University 3Beijing University of Posts and Telecommunications {yang pc, zhangzhihan, luofuli, xusun}@pku.edu.cn [email protected], [email protected] Abstract Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms. Previous work focuses on automatic commenting based solely on textual content. However, in real-scenarios, online articles usually contain multiple modal contents. For instance, graphic news contains plenty of images in addition to text. Contents other than text are also vital because they are not only more attractive to the reader but also may provide critical information. To remedy this, we propose a new task: cross-model automatic commenting (CMAC), which aims to make comments by integrating multiple modal contents. We construct a largescale dataset for this task and explore several representative methods. Going a step further, an effective co-attention model is presented to capture the dependency between textual and visual information. Evaluation results show that our proposed model can achieve better performance than competitive baselines. 1 1 Introduction Comments of online articles can provide rich supplementary information, which reduces the difficulty of understanding the article and enhances interactions between users. Therefore, achieving automatic commenting is necessary since it can contribute to improving user experience and increasing the activeness of social media platforms. Due to the importance described above, some work (Qin et al., 2018; Lin et al., 2018; Ma et al., 2018) has explored this task. However, these efforts are all focus on automatic commenting based solely on textual content. In real-scenarios, online ∗Equal Contribution. 1The dataset and code are available at https:// github.com/lancopku/CMAC News Images News Title ᱕᜿⳾❦ኡ㾯зәṳ㣡ᜩӪ䞹(Spring is coming! Thousands of acres are filled with intoxicating peach blossoms in Shanxi.) News Body 䘁ᰕኡ㾯ᒣ励зәṳ㣡ㄎ⴨㔭᭮ˈ⑨Ӫ⊹䞹㣡ыѝˈቭᛵᝏਇ᱕ ཙⲴ≄᚟DŽ (Recently, thousands of acres of peach blossoms are in full bloom at Pinglu, Shanxi Province. Visitors are immersed in the beautiful flowers, enjoying the breath of spring.) Comments 1. ᥪ┲Ӟˈ⍱䘎ᘈ䘄ʽ (Beautiful flowers! I can’t move my eyes from them.) 2. ⋑ᴹ㔯㥹Ⲵ㺜ᢈˈṳ㣡ቁҶа⛩㖾ᝏDŽ(Peach blossoms seem to be a little less pretty without any green grass as background.) 3. 㔯㢢ཊ⛩ቡྭҶDŽ (It would be better if there is more greenness.) Figure 1: An example in the constructed dataset. Red words indicate the content that is not included in the text but depicted in the images. articles on social media usually contain multiple modal contents. Take graphic news as an example, it contains plenty of images in addition to text. Other contents except text are also vital to improving automatic commenting. These contents may contain some information that is critical for generating informative comments. 
In addition, compared to plain text, these contents of other modalities are more attractive to the reader, making it easily become the focus of comments. Toward filling this gap, we propose the task of cross-model automatic commenting (CMAC), which aims to generate comments by integrating information of multiple modalities. We construct a large-scale cross-model comments dataset, which consists of 24,134 graphic news. Each instance is composed of several news photos, news title, news body, and corresponding high-quality comments. Figure 1 visually shows a sample in the dataset. 2681 Since the comments depend on the contents of multiple modalities, how to integrate these multimodal information becomes the focus. In fact, there exist intrinsic interactions between these input multimodal information. Various modalities can benefit from each other to obtain better representations. For instance, in the graphic news, images can help to highlight the important words in the text, while text also contributes to focusing on key regions of images. Therefore, we present a coattention model so that the information of multiple modalities can mutually boost for better representations. Experiments show that our co-attention model can substantially outperform various baselines from different aspects. The main contributions of this work are summarized as follows: • We propose the task of cross-modal automatic commenting (CMAC) and construct a large-scale dataset. • We present a novel co-attention model, which aims at capturing intrinsic interactions between multiple modal contents. • The experiments show that our approach can achieve better performance than competitive baselines. With multiple modal information and co-attention, the generated comments are more diverse and informative. 2 Cross-Modal Comments Dataset We introduce our constructed cross-modal comments dataset from the following aspects. Data collecting We collect data from the photo channels of a popular Chinese news website called Netease News2. The crawled news cover various categories including entertainment, sports, and more. We tokenize all texts into words, using a python package Jieba3. To guarantee the quality of the comments, we reserve comments with the length between 5 to 30 words and remove useless symbols and dirty words. Besides, we filter out short articles with less than 10 words or 3 images in its content, while unpopular articles with less than 10 pieces of comments are also removed. Finally, we acquire a dataset with 24,134 pieces of news. Each instance contains the news title and its body, several images and a list of high-quality 2http://news.163.com/photo 3https://github.com/fxsjy/jieba Statistic Train Dev Test Total # News 19,162 3,521 1,451 24,134 # Comments 746,423 131,175 53,058 930,656 Avg. Images 5.81 5.78 5.81 5.80 Avg. Body 54.75 54.72 55.07 54.77 Avg. Comment 12.19 12.21 12.18 12.19 Table 1: Statistics of the dataset. # News and # Comments denote the total number of news and comments, respectively. Avg. Images is the average number of images per news. Avg. Body is the average number of words per body, and similar to Avg. Comment. Evaluation Flue. Rele. Info. Overall Score 9.2 6.7 6.4 7.6 Pearson 0.74 0.76 0.66 0.68 Table 2: Quality evaluation results of the testing set. Flue., Rele. and Info. denotes fluency, relevance, and informativeness, respectively. comments. On average, each news in the dataset contains about 39 human-written comments. Data Statistics The dataset is split according to the corresponding news. 
The comments from the same news will appear solely in the training or testing set to avoid overfitting. In more detail, we split the data into 19,162, 3,521 and 1,451 news in the training, development, and testing sets, respectively. The corresponding number of comments is 746,423, 131,175 and 53,058, respectively. The statistics of the final dataset are presented in Table 1 and Figure 2 shows the distribution of the lengths for comments in both wordlevel and character-level. Data Analysis High-quality testing set is necessary for faithful automatic evaluation. Therefore, we randomly selected 200 samples from the testing set for quality evaluation. Three annotators with linguistic background are required to score comments and readers can refer to Section 4.3 for the evaluation details. Table 2 shows the evaluation results. The average score for overall quality is 7.6, showing that the testing set is satisfactory. 3 Proposed Model Given the texts4 x and images v of an online article, the CMAC task aims to generate a reasonable and fluent comment y. Figure 3 presents the overview of our proposed model, which is elaborated on in detail as follows. 4We concatenate the title and body into a single sequence. 2682 Figure 2: The distribution of lengths for comments in terms of both word-level and character-level. 3.1 Textual Encoder and Visual Encoder The textual encoder aims to obtain representations of textual content x. We implement it as a GRU model (Cho et al., 2014), which computes the hidden representation of each word as follows: hx i = GRU  hx i−1, e(xi)  (1) where e(xi) refers to the embedding of the word xi. Finally, the textual representation matrix is denoted as Hx = {hx 1, · · · , hx |x|} ∈R|x|×d1, where |x| is the total number of textual representations and d1 is the dimension of hx i . We apply ResNet (He et al., 2016a) as visual encoder to obtain the visual representation5 hv i of the i-th image vi. The final visual representation matrix is denoted as Hv = {hv 1, · · · , hv |v|} ∈R|v|×d2, where |v| is the number of visual representations and d2 is the dimension of hv i . 3.2 Co-Attention Mechanism We use co-attention mechanism to capture the intrinsic interaction between visual content and textual content. The two modal information are connected by calculating the similarity matrix S ∈ R|v|×|x| between Hv and Hx. Formally, S = HvW(Hx)T (2) where W ∈Rd2×d1 is a trainable matrix and Sij denotes similarity between the i-th visual representation and the j-th textual representation. S is normalized row-wise to produce the vision-to-text attention weights Ax, and column-wise to produce the text-to-vision attention weights Av: Ax = softmax(S) ∈R|v|×|x| (3) Av = softmax(ST) ∈R|x|×|v| (4) where softmax(·) means row-wise normalization. Hence we can obtain the vision-aware textual rep5Multiple representations can be extracted from an image. ‡•‡– ۶௩ ۶௫ ݄ଵ ݄ଷ ݄ଶ ––‡–‹‘ ––‡–‹‘ ––‡–‹‘ ––‡–‹‘ ǥ ǥ ݒଵ ݒଶ ݒଷ ݒସ ݔଵ ݔଶ ݔଷ ሺۯ௩ሻ୘ ۯ௫ ෡۶௩ ෡۶௫ ݃௧ାଵ ݃௧ିଵ ݃௧ ݕ௧ାଵ ݕ௧ ݕ௧ିଵ Figure 3: The overview of our proposed model. resentations ˆHx ∈R|v|×d1 by a product of the attention weight Ax and textual representation Hx: ˆHx = AxHx (5) Similarly, the text-aware visual representations ˆHv ∈R|x|×d2 can be obtained by: ˆHv = AvHv (6) Since Hx and Hv mutually guide each other’s attention, these two sources of information can mutually boost for better representations. 3.3 Decoder The decoder aims to generate the desired comment y via another GRU model. 
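(Before the decoder description continues, a minimal PyTorch-style sketch of the co-attention computation in Eqs. (2)-(6) may help make the tensor shapes concrete. It is an illustrative re-implementation under assumed dimensions, not the authors' released code.)

import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, d_text, d_vision):
        super().__init__()
        # Trainable matrix W of Eq. (2), mapping textual dims to visual dims.
        self.W = nn.Parameter(torch.randn(d_vision, d_text) * 0.01)

    def forward(self, Hx, Hv):
        # Hx: [|x|, d_text] textual representations; Hv: [|v|, d_vision] visual ones.
        S = Hv @ self.W @ Hx.t()           # Eq. (2): similarity matrix, [|v|, |x|]
        Ax = torch.softmax(S, dim=-1)      # Eq. (3): vision-to-text attention
        Av = torch.softmax(S.t(), dim=-1)  # Eq. (4): text-to-vision attention
        Hx_hat = Ax @ Hx                   # Eq. (5): vision-aware textual reps, [|v|, d_text]
        Hv_hat = Av @ Hv                   # Eq. (6): text-aware visual reps, [|x|, d_vision]
        return Hx_hat, Hv_hat

# Assumed sizes: 20 text tokens with d1 = 512 and 6 image regions with d2 = 2048.
Hx, Hv = torch.randn(20, 512), torch.randn(6, 2048)
Hx_hat, Hv_hat = CoAttention(d_text=512, d_vision=2048)(Hx, Hv)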
Since there exists information from multiple modalities, we equip decoder with multiple attention mechanisms. The hidden state gt+1 of decoder at time-step t + 1 is computed as: gt+1 = GRU  gt, [e(yt); cx t ; cv t ; ˆcx t ; ˆcv t ]  (7) where semicolon represents vector concatenation, yt is the word generated at time-step t and cx t is obtained by attending to Hx with gt as query, cx t = A(gt, Hx) (8) where A refers to the attention mechanism. Readers can refer to Bahdanau et al. (2015) for the detailed approach. cv t , ˆcx t , and ˆcv t are obtained in a similar manner by replacing Hx in Eq. (8) with Hv, ˆHx, and ˆHv, respectively. Finally, the decoder samples a word yt+1 from the output probability distribution as follows: yt+1 ∼softmax(Ugt+1) (9) 2683 where U is a weight matrix. The model is trained by maximizing the log-likelihood of ground-truth y∗= (y∗ 1, · · · , y∗ n) and the loss function is: L = − n  t=1 log  p(y∗ t |y∗ <t, x, v)  (10) where y∗ <t denotes the sequence (y∗ 1, · · · , y∗ t−1). 3.4 Extension to Transformer We also extend our approach to Transformer (Vaswani et al., 2017). In detail, we adopt selfattention to implement the textual encoder. The representation of each word can be written as: hx i = SelfAtten(xi, x) (11) which means that the multi-head attention component attends to the text x with the query xi. We strongly recommend readers to refer to Vaswani et al. (2017) for the details of self-attention. The decoder is also implemented with selfattention mechanism. More specifically, the hidden state of decoder at time-step t is calculated as: gt = SelfAtten(yt, y, Hx, Hv, ˆHx, ˆHv) (12) Inside the decoder, there are five multi-head attention components, using yt as query to attend to y, Hx, Hv, ˆHx, and ˆHv, respectively. 4 Experiments 4.1 Settings The batch size is 64 and the vocabulary size is 15,000. The 512-dim embeddings are learned from scratch. The visual encoder is implemented as ResNet-152 (He et al., 2016a) pretrained on the ImageNet. For the Seq2Seq version of our approach, both textual encoder and decoder is a 2layer GRU with hidden size 512. For the transformer version, we set the hidden size of multihead attention to 512 and the hidden size of feedforward layer to 2,048. The number of heads is set to 8, while a transformer layer consists of 6 blocks. We use Adam optimizer (Kingma and Ba, 2015) with learning rate 10−3 and apply dropout (Srivastava et al., 2014) to avoid over-fitting. 4.2 Baselines We adopt the following competitive baselines: Seq2Seq: We implement a series of baselines based on Seq2Seq. S2S-V (Vinyals et al., 2015) Models BLEU-1 ROUGE-L DIST-1 DIST-2 S2S-V 6.1 7.8 1348 3293 S2S-T 6.3 8.1 1771 4285 S2S-VT 6.6 8.5 1929 4437 Our (S2S) 7.1 9.1 2279 4743 Trans-V 5.9 7.6 1336 3472 Trans-T 6.4 8.3 1772 4694 Trans-VT 6.8 8.6 1891 4739 Our (Trans) 7.7 9.4 2265 4941 Table 3: Automatic evaluations of our method and baselines. DIST-1 and DIST-2 are the number of distinct unigrams and bigrams, respectively. only encodes images via CNN as input. S2ST (Bahdanau et al., 2015) is the standard Seq2Seq that only encodes texts as input. S2S-VT (Venugopalan et al., 2015) adopts two encoders to encode images and texts respectively. Transformer: We replace the Seq2Seq in the above baselines with Transformer (Vaswani et al., 2017). The corresponding models are named Trans-V, Trans-T, and Trans-VT, respectively. 4.3 Evaluation Metrics We adopt two kinds of evaluation methods: automatic evaluation and human evaluation. 
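(As a small aside on Table 3, the DIST-1/DIST-2 statistics, i.e. the number of distinct unigrams and bigrams over all generated comments, can be computed as in the sketch below. It assumes whitespace-separated word tokens and is not the authors' evaluation script.)

def distinct_n(outputs, n):
    # Count distinct n-grams across all generated comments (cf. DIST-1 / DIST-2 in Table 3).
    ngrams = set()
    for sentence in outputs:
        tokens = sentence.split()
        for i in range(len(tokens) - n + 1):
            ngrams.add(tuple(tokens[i:i + n]))
    return len(ngrams)

outputs = ["what beautiful flowers", "it would be better with more greenness"]
dist1, dist2 = distinct_n(outputs, 1), distinct_n(outputs, 2)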
Automatic evaluation: We use BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) to evaluate overlap between outputs and references. We also calculate the number of distinct n-grams (Li et al., 2016) in outputs to measure diversity. Human evaluation: Three annotators score the 200 outputs of different systems from 1 to 10. The evaluation criteria are as follows. Fluency measures whether the comment is fluent. Relevance evaluates the relevance between the output and the input. Informativeness measures the amount of useful information contained in the output. Overall is a comprehensive metric. For each metric, the average Pearson correlation coefficient is greater than 0.6, indicating that the human scores are highly consistent. 4.4 Experimental Results Table 3 and Table 4 show the results of automatic evaluation and human evaluation, respectively. We perform analysis from the following aspects. The effectiveness of co-attention Both Table 3 and Table 4 show that our model can substantially outperform competitive baselines in all metrics. 2684 Models Flue. Rele. Info. Overall S2S-V 3.1 2.8 2.5 3.2 S2S-T 4.5 4.6 3.7 4.7 S2S-VT 4.6 5.1 4.3 4.9 Our (S2S) 4.8 5.7 4.7 5.1 Trans-V 2.9 2.3 2.8 2.9 Trans-T 4.3 4.8 4.4 4.6 Trans-VT 4.7 4.6 4.7 5.1 Our (Trans) 4.9 5.9 5.0 5.2 Table 4: Results of human evaluation. Flue., Rele. and Info. denotes fluency, relevance, and informativeness, respectively. For instance, the Transformer version of our approach achieves a 13% relative improvement of BLEU-1 score over Trans-VT. This illustrates that our co-attention can contribute to generating highquality comments. The co-attention mechanism brings bidirectional interactions between visual information and textual information, so that two information sources can mutually boost for better representations, leading to improved performance. The universality of co-attention Results show that both the Seq2Seq and Transformer version of our approach can outperform various baselines based on the same architecture. This shows that our co-attention has excellent universality, which can be applied to various model architectures. The contribution of visual content According to Table 3 and Table 4, although the images contribute less to generating high-quality comments than texts, they still bring a positive impact on the generation. This illustrates that visual content contains additional useful information, which facilitates the generation of informative comments. Therefore, integrating multi-modal information is necessary for generating high-quality comments, which is also an important value of our work. 5 Related Work In summary, this paper is mainly related to the following two lines of work. Automatic article commenting. One similar task to CMAC is automatic article commenting. Qin et al. (2018) is the first to propose this task and constructs a large-scale dataset. Lin et al. (2018) proposes to retrieve information from usergenerated data to facilitate the generation of comments. Furthermore, Ma et al. (2018) introduces a retrieval-based unsupervised model to perform generation from unpaired data. However, different from the article commenting that only requires extracting textual information for generation, the CMAC task involves not only the modeling of textual features but also the understanding of visual images, which poses a greater challenge to the intelligent systems. Co-attention. We are also inspired by the related work of co-attention mechanism. Lu et al. 
(2016a) introduces a hierarchical co-attention model in visual question answering to jointly attend to images and questions. Xiong et al. (2017) proposes a dynamic co-attention network for the question answering task and Seo et al. (2017) presents a bi-directional attention network to acquire query-aware context representations in machine comprehension. Tay et al. (2018a) proposes a co-attention mechanism based on Hermitian products for asymmetrical text matching problems. Zhong et al. (2019) further presents a coarse-grain fine-grain co-attention network that combines information from evidence across multiple documents for question answering. In addition, the co-attention mechanism can also be applied to word sense disambiguation (Luo et al., 2018), recommended system (Tay et al., 2018b), and essay scoring (Zhang and Litman, 2018). 6 Conclusion In this paper, we propose the task of cross-modal automatic commenting, which aims at enabling the AI agent to make comments by integrating multiple modal contents. We construct a largescale dataset for this task and implement plenty of representative neural models. Furthermore, an effective co-attention model is presented to capture the intrinsic interaction between multiple modal contents. Experimental results show that our approach can substantially outperform various competitive baselines. Further analysis demonstrates that with multiple modal information and co-attention, the generated comments are more diverse and informative. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. Xu Sun is the contact author of this paper. 2685 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, Conference Track Proceedings. David Chen and William B. Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, pages 190–200. Deli Chen, Shuming Ma, Pengcheng Yang, and Xu Sun. 2018. Identifying high-quality chinese news comments based on multi-target text matching model. arXiv preprint arXiv:1808.07191. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724– 1734. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd International Conference on Learning Representations, Conference Track Proceedings. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In Computer Vision ECCV 2014 - 13th European Conference, Proceedings, Part V, pages 740–755. Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2018. Learning comment generation by leveraging user-generated data. arXiv preprint arXiv:1810.12264. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016a. Hierarchical question-image coattention for visual question answering. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pages 289–297. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016b. Hierarchical question-image coattention for visual question answering. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pages 289–297. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. arXiv preprint arXiv:1905.10060. Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, and Baobao Chang. 2018. Leveraging gloss knowledge in neural word sense disambiguation by hierarchical co-attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1402–1411. Shuming Ma, Lei Cui, Furu Wei, and Xu Sun. 2018. Unsupervised machine commenting with neural variational topic model. arXiv preprint arXiv:1809.04960. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1310–1318. Lianhui Qin, Lemao Liu, Wei Bi, Yan Wang, Xiaojiang Liu, Zhiting Hu, Hai Zhao, and Shuming Shi. 2018. Automatic article commenting: the task and dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 2: Short Papers, pages 151–156. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, Conference Track Proceedings. 2686 Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112. 
Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018a. Hermitian co-attention networks for text matching in asymmetrical domains. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4425–4431. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018b. Multi-pointer co-attention networks for recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2309–2318. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 6000– 6010. Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence - video to text. In 2015 IEEE International Conference on Computer Vision, pages 4534–4542. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3156– 3164. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In 5th International Conference on Learning Representations, Conference Track Proceedings. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. MSR-VTT: A large video description dataset for bridging video and language. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 5288–5296. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67– 78. Haoran Zhang and Diane J. Litman. 2018. Co-attention based neural network for source-dependent essay scoring. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 399–409. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1103–1108. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. arXiv preprint arXiv:1901.00603.
2019
257
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2687–2693 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2687 A Working Memory Model for Task-oriented Dialog Response Generation Xiuyi Chen1,2,3, Jiaming Xu1,2∗and Bo Xu1,2,3,4 1Institute of Automation, Chinese Academy of Sciences (CASIA). Beijing, China 2Research Center for Brain-inspired Intelligence, CASIA 3University of Chinese Academy of Sciences 4Center for Excellence in Brain Science and Intelligence Technology, CAS. China {chenxiuyi2017,jiaming.xu,xubo}@ia.ac.cn Abstract Recently, to incorporate external Knowledge Base (KB) information, one form of world knowledge, several end-to-end task-oriented dialog systems have been proposed. These models, however, tend to confound the dialog history with KB tuples and simply store them into one memory. Inspired by the psychological studies on working memory, we propose a working memory model (WMM2Seq) for dialog response generation. Our WMM2Seq adopts a working memory to interact with two separated long-term memories, which are the episodic memory for memorizing dialog history and the semantic memory for storing KB tuples. The working memory consists of a central executive to attend to the aforementioned memories, and a short-term storage system to store the “activated” contents from the longterm memories. Furthermore, we introduce a context-sensitive perceptual process for the token representations of the dialog history, and then feed them into the episodic memory. Extensive experiments on two task-oriented dialog datasets demonstrate that our WMM2Seq significantly outperforms the state-of-the-art results in several evaluation metrics. 1 Introduction Task-oriented dialog systems, such as hotel booking or technical support service, help users to achieve specific goals with natural language. Compared with traditional pipeline solutions (Williams and Young, 2007; Young et al., 2013; Wen et al., 2017), end-to-end approaches recently gain much attention (Zhao et al., 2017; Eric and Manning, 2017a; Lei et al., 2018), because they directly map dialog history to the output responses and consequently reduce human effort for modular designs and hand-crafted state labels. To effectively incorporate KB information and perform knowledge∗Corresponding Author based reasoning, memory augmented models have been proposed (Bordes et al., 2017; Seo et al., 2017; Eric and Manning, 2017b; Madotto et al., 2018; Raghu et al., 2018; Reddy et al., 2019; Wu et al., 2019). Bordes et al. (2017) and Seo et al. (2017) attended to retrieval models, lacking the ability of generation, while others incorporated the memory (i.e. end-to-end memory networks, abbreviated as MemNNs, Sukhbaatar et al. (2015)) and copy mechanism (Gu et al., 2016) into a sequential generative architecture. However, most models tended to confound the dialog history with KB tuples and simply stored them into one memory. A shared memory forces the memory reader to reason over the two different types of data, which makes the task harder, especially when the memory is large. To explore this problem, Reddy et al. (2019) very recently proposed to separate memories for modeling dialog context and KB results. In this paper, we adopt working memory to interact with two longterm memories. Furthermore, compared to Reddy et al. (2019), we leverage the reasoning ability of MemNNs to instantiate the external memories. Our intuition comes from two aspects. 
First, psychologists tend to break down the long-term memory1 into episodic memory for events (e.g. visual and textual perceptual inputs) and semantic memory for facts (world knowledge, such as KB information) as not all memory of experiences is the same (Gazzaniga and Ivry, 2013). Second, a successful task-oriented dialog system needs more intelligence, and recent works suggest that a critical component of intelligence may be working memory (Sternberg and Sternberg, 2016). Hence, leveraging the knowledge from psychological studies (Baddeley and Hitch, 1974; Baddeley, 2000; Dosher, 2003), we explore working memory for the dialog response generation. Our contributions 1Here, the long-term memory is referred to declarative memory that we have conscious access to. 2688 are summarized as follows: Firstly, inspired by the psychological studies on working memory, we propose the WMM2Seq for dialog generation which separates the storage of dialog history and KB information by using the episodic and semantic memories and then leverages the working memory to interact with them. Secondly, we leverage two kinds of transformations (CNN and biGRU) to incorporate the context information for better token representations. This procedure can be seen as a part of perceptual processes before the episodic memory storage, and can alleviate the Out-Of-Vocabulary (OOV) problem. Finally, our WMM2Seq outperforms the existing methods on several evaluation metrics in two task-oriented dialog datasets and shows a better reasoning ability in the OOV situation. 2 Model Description Figure 1 illustrates the flow of our WMM2Seq for dialog response generation. WMM2Seq can be seen as an encoder-decoder model, where decoder is the Working Memory (WM) which could interact with two long-term memories (the episodic memory memorizing dialog history and semantic memory storing KB information). As MemNN is well-known for its multiple hop reasoning ability, we instantiate the encoder and the two memories with three different MemNNs (MemNN Encoder, E-MemNN and S-MemNN). Furthermore, we augment E-MemNN and S-MemNN with copy mechanism from where we need to copy tokens or entities. The encoder encodes the dialog history to obtain the high-level signal, a distributed intent vector. The WM consists of a Short-Term Storage system (STS) and a Central-EXE including an Attention Controller (Attn-Ctrl) and a rule-based word selection strategy. The Attn-Ctrl dynamically generates the attention control vector to query and reason over the two long memories and then stores three “activated” distributions into STS. Finally a generated token is selected from the STS under the word selection strategy at each decoder step. The symbols are defined in Table 1, and more details can be found in the supplementary material. We omit the subscript E or S2, following Madotto et al. (2018) to define each pointer index set: ptri = ( max(z) if ∃z s.t. yi = xbz nxb + 1 otherwise , (1) 2Note, all variables belonging to the episodic memory are with subscript E, and semantic memory are with subscript S. Symbol Definition xi or yi a token in the dialog history or system response $ a special token used as a sentinel (Madotto et al., 2018) X X = {x1, . . . , xn, $}, the dialog history Y Y = {y1, · · · , ym}, the expected response bi one KB tuple, actually the corresponding entity B B = {b1, · · · , bl, $}, the KB tuples P T RE = {ptrE,1, · · · , ptrE,m}, dialog pointer index set. 
P T RE supervised information for copying words in dialog history P T RS = {ptrS,1, · · · , ptrS,m}, KB pointer index set. P T RS supervised information for copying entities in KB tuples Table 1: Notation Table. where xbz ∈X or B is the dialog history or KB tuples according to the subscript (E or S) and nxb + 1 is the sentinel position index as nxb is equal to the dialog history length n or the number of KB triples l. The idea behind Eq. 1 is that we can obtain the positions of where to copy by matching the target text with the dialog history or KB information. Furthermore, we hope this provides the model with an accurate guidance of how to activate the two long-term memories. 2.1 MemNN Encoder Here, on the context of our task, we give a brief description of K-hop MemNN with adjacent weight tying and more details can be found in (Sukhbaatar et al., 2015). The memory of MemNN is represented by a set of trainable embedding matrices C = {C1, . . . , CK+1}. Given input tokens in the dialog history X, MemNN first writes them into memories by Eq. 2 and then uses a query to iteratively read from them with multi hops to reason about the required response by Eq. 3 and Eq. 4. For each hop k, we update the query by Eq. 5 and the initial query is a learnable vector as like Yang et al. (2016). The MemNN encoder finally outputs a user intent vector oK. Ak i = Ck(xi) (2) pk i = Softmax((qk)T Ak i ) (3) ok = X i pk i Ak+1 i (4) qk+1 = qk + ok (5) To incorporate the context information, we explore two context-aware transformation TRANS(·) by replacing Eq. 2 with Ak i = TRANS(Ck(xi)), which is defined as follows: hi = TRANS(φe(xi)) = CNN([φe(xi−2), . . . , φe(xi+2)]), (6) 2689 Resto#1 phone resto#1_phone Resto#1 R_cuisine french Resto#1 R_address resto#1_address Resto#1 R_location paris Resto#1 R_number six Resto#1 R_price moderate Resto#1 R_rating 6 … … Resto#N R_rating 3 K o q Encoder KB Tuples STS Working Memory ˆty U: hi S: hello what can I help you with today U: may I have a table in paris S: i’m on it S: any preference on a type of cuisine U: i love indian food … … S: api_call italian paris six moderate U: instead could it be with french food … … S: ok let me look into some options for you U: <SILENCE> S: api_call french paris six moderate U: <SILENCE> Dialog History S-MemNN tq GRU 1 ˆty  1 tq  Attn-Ctrl tq S ptr P  Central-EXE E-MemNN High-level Signal Gate K o Predicted Response … … 1ˆy 1 ˆty  ˆty … … what do you think Encoder Decoder tq , E t q vocab P E ptr P  Figure 1: The Working Memory (WM) interacts with two long-term memories to generate the response. or hi = TRANS(φe(xi)) = " ⇀ hi ↼ hi # = " −−−→ GRU(φe(xi), ⇀ hi−1) ←−−− GRU(φe(xi), ↼ hi+1) # , (7) where hi is the context-aware representation, and φe is a trainable embedding function. We combine MemNNs with TRANS(·) to alleviate the OOV problem when reasoning about memory contents. 2.2 Working Memory Decoder Inspired by the studies on the working memory, we design our decoder as an attentional control system for dialog generation which consists of the working memory and two long-term memories. As shown in Figure 1, we adopt the E-MemNN to memorize the dialog history X as described in Section 2.1, and then store KB tuples into the S-MemNN without TRANS(·). We also incorporate additional temporal information and speaker information into dialog utterances as (Madotto et al., 2018) and adopt a (subject, relation, object) representation of KB information as (Eric and Manning, 2017b). More details can be found in the supplementary material. 
Having written dialog history and KB tuples into E-MemNN and S-MemNN, we then use the WM to interact with them (to query and reason over them) to generate the response. At each decoder step, the Attn-Ctrl, instantiated as a GRU, dynamically generates the query vector qt as follows: qt = GRU(C1 E(ˆyt−1), qt−1). (8) Here, query qt is used to access E-MemNN activating the final query qE = oK E , vocabulary distribution Pvocab by Eq. 9 and copy distribution for dialog history PE·ptr. When querying S-MemNN, we consider the dialog history by using query q′ t = qE +qt and then obtain the copy distribution for KB entities PS·ptr. The two copy distributions are obtained by augmenting MemNNs with copy mechanism that is PE·ptr = pK E,t and PS·ptr = pK S,t. Pvocab(ˆyt) = Softmax(W1[qt; o1 E]). (9) Now, three distributions, Pvocab, PE·ptr and PS·ptr, are activated and moved into the STS, and then a proper word is generated from the activated distributions. We here use a rule-based word selection strategy by extending the sentinel idea in (Madotto et al., 2018), which is shown in Figure 1. If the expected word is not appearing either in the episodic memory or the semantic memory, the two copy pointers are trained to produce the sentinel token and our WMM2Seq generates the token from Pvocab; otherwise, the token is generated by copying from either the dialog history or KB tuples and this is done by comparing the two copy distributions. We always select the other distribution if one of the two distributions points to the sentinel or select to copy the token corresponding to the biggest probability of the two distributions. Hence, during the training stage, all the parameters are jointly learned by minimizing the sum of three standard cross-entropy losses with the corresponding targets (Y , PTRE and PTRS). 2690 Task Ptr-Unk Mem2Seq HyP-MN GLMP WMM2Seq+CNN WMM2Seq+biGRU WMM2Seq WMM2Seq+biGRU (H1) T1 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T2 100 (100) 100 (100) 99.9 (99.8) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T3 85.1 (19.0) 94.7 (62.1) 94.9 (63.2) 96.3 (75.6) 95.03 (63.6) 95.32 (68.2) 94.94 (63.9) 95.01 (64.6) T4 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T5 99.4 (91.5) 97.9 (69.6) 97.7 (67) 99.2 (88.5) 98.49 (76.6) 99.34 (90.3) 97.95 (71.2) 99.26 (88.8) T1-OOV 92.5 (54.7) 94.0 (62.2) 100 (100) 100 (100) 100 (100) 100 (100) 91.28 (57.2) 100 (100) T2-OOV 83.2 (0) 86.5 (12.4) 100 (100) 100 (100) 100 (100) 100 (100) 83.28 (0) 100 (100) T3-OOV 82.9 (13.4) 90.3 (38.7) 95.6 (63.9) 95.5 (65.7) 94.87 (66.2) 94.64 (61.6) 94.54 (60.5) 94.80 (62.2) T4-OOV 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T5-OOV 73.6 (0) 84.5 (2.3) 89.3 (9.7) 92.0 (21.7) 92.32 (24.3) 92.56 (24.3) 84.45 (3.6) 91.86 (22.0) Table 2: Per-response and per-dialog (in the parentheses) accuracy on bAbI dialogs. 3 Experiments We conduct experiments on the simulated bAbI Dialogue dataset (Bordes et al., 2017) and the Dialog State Tracking Challenge 2 (DSTC2) (Henderson et al., 2014). We actually adopt the refined version of DSTC2 from Bordes et al. (2017) and their statistics are given in the supplementary material. Our model is trained end-to-end using Adam optimizer (Kingma and Ba, 2014), and the responses are generated using greedy search without any rescoring techniques. The shared size of embedding and hidden units is selected from [64, 512] and the default hop K = 3 is used for all MemNNs. 
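(Returning briefly to the rule-based word selection of Section 2.2, its sentinel logic can be sketched as follows. This NumPy illustration uses our own variable names, argument layout, and tie-breaking, and is not the authors' released implementation.)

import numpy as np

def select_word(p_vocab, p_e_ptr, p_s_ptr, history_tokens, kb_entities, vocab):
    # p_e_ptr / p_s_ptr are copy distributions whose last position is the sentinel $.
    e_idx, s_idx = int(np.argmax(p_e_ptr)), int(np.argmax(p_s_ptr))
    e_sentinel = e_idx == len(p_e_ptr) - 1
    s_sentinel = s_idx == len(p_s_ptr) - 1
    if e_sentinel and s_sentinel:
        # Neither long-term memory contains the expected word: generate from Pvocab.
        return vocab[int(np.argmax(p_vocab))]
    if e_sentinel:
        return kb_entities[s_idx]      # copy an entity from the KB tuples
    if s_sentinel:
        return history_tokens[e_idx]   # copy a word from the dialog history
    # Both memories propose a copy: keep the token with the larger probability.
    return history_tokens[e_idx] if p_e_ptr[e_idx] >= p_s_ptr[s_idx] else kb_entities[s_idx]

# Toy call: the dialog pointer hits its sentinel, so a KB entity is copied.
word = select_word(np.array([0.2, 0.5, 0.3]),
                   np.array([0.1, 0.1, 0.1, 0.7]),
                   np.array([0.6, 0.3, 0.1]),
                   ["what", "about", "french"],
                   ["resto#1_phone", "resto#1_address"],
                   ["yes", "sure", "ok"])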
The learning rate is simply fixed to 0.001 and the dropout ratio is sampled from [0.1, 0.4]. Furthermore, we randomly mask some memory cells with the same dropout ratio to simulate the OOV situation for both episodic and semantic memories. The hyper-parameters for best models are given in the supplementary material. 3.1 Results and Analysis We use Per-response/dialog Accuracy (Bordes et al., 2017), BLEU (Papineni et al., 2002) and Entity F1 (Madotto et al., 2018) to compare the performance of different models. And the baseline models are Seq2Seq+Attn (Luong et al., 2015), Pointer to Unknown (Ptr-Unk, Gulcehre et al. (2016)), Mem2Seq (Madotto et al., 2018), Hierarchical Pointer Generator Memory Network (HyPMN, Raghu et al. (2018)) and Global-to-Local Memory Pointer (GLMP, Wu et al. (2019)). Automatic Evaluation: The results on the bAbI dialog dataset are given in Table 2. We can see that our model does much better on the OOV situation and is on par with the best results on T5. Moreover, our model can perfectly issue API calls (task 1), update API calls (task 2) and provide extra information (task 4). As task 5 is a combination of tasks 1-4, our best performance on T5-OOV exhibits the powerful reasoning ability to the unseen Ent. F1 BLEU Per-Resp.(Dial.) Seq2Seq 69.7 55.0 46.4 (1.5) Seq2Seq+Attn 67.1 56.6 46.0 (1.4) Seq2Seq+Copy 71.6 55.4 47.3 (1.3) Mem2Seq 75.3 55.3 45.0 (0.5) HyP-MN 73.9 55.4 46.4 (1.7) WMM2Seq+CNN 80.73 57.33 48.80 (1.61) WMM2Seq+biGRU 80.23 58.39 49.02 (1.25) WMM2Seq 75.45 56.81 45.25 (1.25) WMM2Seq+biGRU (H1) 78.87 58.57 48.81 (1.61) Table 3: Automatic Evaluation on DSTC2. dialog history and KB tuples. And this reasoning ability is also proved by the performance improvements on the DSTC2 dataset according to several metrics in Table 3. Especially, a significant improvement on entity F1 scores indicates that our model can choose the right entities and incorporate them into responses more naturally (with highest BLEU scores). Furthermore, there is no significant difference between the two kinds of the transformation TRANS(·). Ablation Study: To better understand the components used in our model, we report our ablation studies from three aspects. First, we remove the context-sensitive transformation TRANS(·) and then find significant performance degradation. This suggests that perceptual processes are a necessary step before storing perceptual information (the dialog history) into the episodic memory and it is important for the performance of working memory. Second, we find that WMM2Seq outperforms Mem2Seq, which uses a unified memory to store dialog history and KB information. We can safely conclude that the separation of context memory and KB memory benefits the performance, as WMM2Seq performs well with less parameters than Mem2Seq on task 5. Finally, we additionally analysis how the multi-hop attention mechanism helps by showing the performance differences between the hop K = 1 and the default hop K = 3. Though multi-hop attention strengthens the reasoning ability and improves the results, we find that the performance difference between the hops K = 1 and K = 3 is not so obvious as shown in 2691 Mem2Seq WMM2Seq Gold Appropriate 4.31 4.47 4.61 Humanlike 4.37 4.48 4.80 Table 4: Human Evaluation. (Madotto et al., 2018; Wu et al., 2019). Furthermore, our model performs well even with one hop, which we mainly attribute to the reasoning ability of working memory. 
The separation of memories and stacking S-MemNN on E-MemNN also help a lot, because the whole external memory, consisting of the episodic and semantic memories, can be seen as a multi-hop (two-level) structure (the first level is the episode memory and the second level is the semantic memory). Attention Visualization: As an intuitive way to show the model’s dynamics, attention weight visualization is also used to understand how the CentralEXE controls the access to the two long-term memories (E-MemNN and S-MemNN). Figure 2 shows the episodic and semantic memory attention vectors at the last hop for each generated token. Firstly, our model generates a different but still correct response as the customer wants a moderately priced restaurant in the west and does not care about the type of food. Secondly, the generated response has tokens from the vocabulary (e.g. “is” and “a”), dialog history (e.g. “west” and “food”) and KB information (e.g. “saint johns chop house” and “british”), indicating that our model learns to interact well with the two long-term memories by two sentinels. Human Evaluation: Following the methods in (Eric and Manning, 2017b; Wu et al., 2019), we report human evaluation of the generated responses in Table 4. We adopt Mem2Seq as the baseline for human evaluation considering its good performance and code release 3. First we randomly select 100 samples from the DSTC2 test set, then generate the corresponding responses using WMM2Seq and Mem2Seq, and finally ask two human subjects to judge the quality of the generated responses according to the appropriateness and humanlikeness on a scale from 1 to 5. As shown in Table 4, WMM2Seq outperforms Mem2Seq in both measures, which is coherent to the automatic evaluation. More details about human evaluation are reported in the supplementary material. 3We thank the authors for releasing their code at https://github.com/HLTCHKUST/Mem2Seq. Figure 2: Last hop semantic and episodic memory attention visualization from the DSTC2 dataset. 4 Conclusion We leverage the knowledge from the psychological studies and propose our WMM2Seq for dialog response generation. First, the storage separation of the dialog history and KB information is very important and we explore two context-sensitive perceptual processes for the word-level representations of the dialog history. Second, working memory is adopted to interact with the long-term memories and then generate the responses. Finally, the improved performance on two task-oriented datasets demonstrates the contributions from the separated storage and the reasoning ability of working memory. Our future work will focus on how to transfer the long-term memory across different tasks. Acknowledgments We would like to thank the anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (61602479), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32070000) and the Beijing Brain Science Project (Z181100001518006). 2692 References Alan Baddeley. 2000. The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4(11):417 – 423. Alan D Baddeley and Graham Hitch. 1974. Working memory. In Psychology of learning and motivation, volume 8, pages 47–89. Elsevier. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations. Dosher. 2003. Working memory. In L. 
Nadel (Ed.), Encyclopedia of cognitive science, volume 4, pages 569–577. London: Nature Publishing Group. Mihail Eric and Christopher Manning. 2017a. A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 468–473. Mihail Eric and Christopher D Manning. 2017b. Keyvalue retrieval networks for task-oriented dialogue. Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49. Michael Gazzaniga and Richard B Ivry. 2013. Cognitive Neuroscience: The Biology of the Mind: Fourth International Student Edition. WW Norton. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140–149. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Dinesh Raghu, Nikhil Gupta, and Mausam. 2018. Hierarchical pointer memory network for task oriented dialogue. arXiv preprint arXiv:1805.01216. Revanth Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2019. Multi-level memory for task oriented dialogs. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Query-reduction networks for question answering. In International Conference on Learning Representations. Robert J Sternberg and Karin Sternberg. 2016. Cognitive psychology. 
Nelson Education. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 438–449. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422. 2693 Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. International Conference on Learning Representations. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue.
2019
258
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694–2703 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2694 Cognitive Graph for Multi-Hop Reading Comprehension at Scale Ming Ding†, Chang Zhou‡, Qibin Chen†, Hongxia Yang‡, Jie Tang† †Department of Computer Science and Technology, Tsinghua University ‡DAMO Academy, Alibaba Group {dm18,chen-qb15}@mails.tsinghua.edu.cn {ericzhou.zc,yang.yhx}@alibaba-inc.com [email protected] Abstract We propose a new CogQA framework for multi-hop question answering in web-scale documents. Founded on the dual process theory in cognitive science, the framework gradually builds a cognitive graph in an iterative process by coordinating an implicit extraction module (System 1) and an explicit reasoning module (System 2). While giving accurate answers, our framework further provides explainable reasoning paths. Specifically, our implementation1 based on BERT and graph neural network (GNN) efficiently handles millions of documents for multi-hop reasoning questions in the HotpotQA fullwiki dataset, achieving a winning joint F1 score of 34.9 on the leaderboard, compared to 23.6 of the best competitor.2 1 Introduction Deep learning models have made significant strides in machine reading comprehension and even outperformed human on single paragraph question answering (QA) benchmarks including SQuAD (Wang et al., 2018b; Devlin et al., 2018; Rajpurkar et al., 2016). However, to cross the chasm of reading comprehension ability between machine and human, three main challenges lie ahead: 1) Reasoning ability. As revealed by adversarial tests (Jia and Liang, 2017), models for single paragraph QA tend to seek answers in sentences matched by the question, which does not involve complex reasoning. Therefore, multi-hop QA becomes the next frontier to conquer (Yang et al., 2018). 2) Explainability. Explicit reasoning paths, which enable verification of logical rigor, are vital for the reliability of QA systems. HotpotQA (Yang et al., 2018) requires models to provide supporting sentences, which 1Codes: https://github.com/THUDM/CogQA 2https://hotpotqa.github.io, March 4, 2019 4XDOLW\&DIH MD]]FOXE /RV$QJHOHV 4XDOLW\&DIH GLQHU 2OG6FKRRO ƉOP *RQHLQ6HFRQGV 4XHVWLRQ:KRLVWKHGLUHFWRURIWKHƉOPZKLFKKDVVFHQHV LQLWƉOPHGDWWKH4XDOLW\&DIHLQ/RV$QJHOHV" 4XDOLW\&DIHZDVD KLVWRULFDO UHVWDXUDQWDQG MD]]FOXEŏ ORFDWLRQIHDWXUHGLQDQXPEHURI +ROO\ZRRGƉOPVLQFOXGLQJ2OG 6FKRROŊʼn*RQHLQ 6HFRQGVŊŏ 2OG6FKRROLVD $PHULFDQFRPHG\ƉOPŏ GLUHFWHGE\ 7RGG3KLOOLSV *RQHLQ6HFRQGVLVD $PHULFDQDFWLRQKHLVW ƉOPŏ GLUHFWHGE\'RPLQLF6HQD /RV$QJHOHV RƋFLDOO\WKH&LW\ RI/RV$QJHOHV DQGRIWHQNQRZQ E\LWVLQLWLDOV /$ 7RGG 3KLOOLSV 'RPLQLF 6HQD KRS KRS FRUUHFW DQVZHU KRS Figure 1: An example of cognitive graph for multi-hop QA. Each hop node corresponds to an entity (e.g., “Los Angeles”) followed by its introductory paragraph. The circles mean ans nodes, answer candidates to the question. Cognitive graph mimics human reasoning process. Edges are built when calling an entity to “mind”. The solid black edges are the correct reasoning path. means unordered and sentence-level explainability, yet humans can interpret answers with step by step solutions, indicating an ordered and entitylevel explainability. 3) Scalability. For any practically useful QA system, scalability is indispensable. 
Existing QA systems based on machine comprehension generally follow the retrieval-extraction framework of DrQA (Chen et al., 2017), reducing the scope of sources to a few paragraphs by pre-retrieval. This framework is a simple compromise between single-paragraph QA and scalable information retrieval, compared to humans' ability to breeze through reasoning with knowledge in massive-capacity memory (Wang et al., 2003). Therefore, insights on the solutions to these challenges can be drawn from the cognitive process of humans. Dual process theory (Evans, 1984, 2003, 2008; Sloman, 1996) suggests that our brains first retrieve relevant information following attention via an implicit, unconscious and intuitive process called System 1, based on which another explicit, conscious and controllable reasoning process, System 2, is then conducted. System 1 provides resources on request, while System 2 enables diving deeper into relational information by performing sequential thinking in the working memory, which is slower but has human-unique rationality (Baddeley, 1992). For complex reasoning, the two systems are coordinated to perform fast and slow thinking (Kahneman and Egan, 2011) iteratively.
In this paper, we propose a framework, namely Cognitive Graph QA (CogQA), contributing to tackling all of the challenges above. Inspired by the dual process theory, the framework comprises functionally different System 1 and System 2 modules. System 1 extracts question-relevant entities and answer candidates from paragraphs and encodes their semantic information. Extracted entities are organized as a cognitive graph (Figure 1), which resembles the working memory. System 2 then conducts the reasoning procedure over the graph, and collects clues to guide System 1 to better extract next-hop entities. The above process is iterated until all possible answers are found, and then the final answer is chosen based on the reasoning results from System 2. An efficient implementation based on BERT (Devlin et al., 2018) and graph neural network (GNN) (Battaglia et al., 2018) is introduced. Our contributions are as follows:
• We propose the novel CogQA framework for multi-hop reading comprehension QA at scale according to human cognition.
• We show that the cognitive graph structure in our framework offers ordered and entity-level explainability and is suitable for relational reasoning.
• Our implementation based on BERT and GNN surpasses previous works and other competitors substantially on all the metrics.
2 Cognitive Graph QA Framework
The reasoning ability of humankind depends critically on relational structures of information. Intuitively, we adopt a directed graph structure for step-by-step deduction and exploration in the cognitive process of multi-hop QA. In our reading comprehension setting, each node in this cognitive graph G corresponds to an entity or possible answer x, also interchangeably denoted as node x. The extraction module, System 1, reads the introductory paragraph para[x] of entity x and extracts answer candidates and useful next-hop entities from the paragraph. G is then expanded with these new nodes, providing explicit structure for the reasoning module, System 2. In this paper, we assume that System 2 conducts deep-learning-based instead of rule-based reasoning by computing hidden representations X of the nodes. Thus System 1 is also required to summarize para[x] into a semantic vector as the initial hidden representation when extracting spans. Then System 2 updates X based on the graph structure as reasoning results for downstream prediction.
Algorithm 1: Cognitive Graph QA
Input: System 1 model S1, System 2 model S2, Question Q, Predictor F, Wiki Database W
1: Initialize cognitive graph G with entities mentioned in Q and mark them frontier nodes
2: repeat
3:   pop a node x from frontier nodes
4:   collect clues[x, G] from predecessor nodes of x   // e.g. clues can be sentences where x is mentioned
5:   fetch para[x] in W if any
6:   generate sem[x, Q, clues] with S1   // initial X[x]
7:   if x is a hop node then
8:     find hop and answer spans in para[x] with S1
9:     for y in hop spans do
10:      if y ∉ G and y ∈ W then
11:        create a new hop node for y
12:      if y ∈ G and edge (x, y) ∉ G then
13:        add edge (x, y) to G
14:        mark node y as a frontier node
15:    end
16:    for y in answer spans do
17:      add new answer node y and edge (x, y) to G
18:    end
19:  end
20:  update hidden representation X with S2
21: until there is no frontier node in G or G is large enough
22: Return arg max over answer nodes x of F(X[x])
Explainability is enjoyed owing to explicit reasoning paths in the cognitive graph. Besides simple paths, the cognitive graph can also clearly display joint or loopy reasoning processes, where new predecessors might bring new clues about the answer. Clues in our framework are a form-flexible concept, referring to information from predecessors that guides System 1 to better extract spans. Apart from newly added nodes, nodes with new incoming edges also need revisits due to new clues. We refer to both of them as frontier nodes.
Scalability means that the time consumption of QA will not grow significantly along with the number of paragraphs.
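To make the procedure in Algorithm 1 concrete, the following minimal Python sketch mirrors its control flow. The System 1 / System 2 interfaces, the predictor and the Wiki lookup are hypothetical stand-ins for illustration only, not the authors' code; real models (BERT, a GNN) would be plugged into these slots.

```python
from collections import deque

# Hypothetical stand-ins for System 1, System 2, the predictor F and the Wiki database W.
def s1_spans(question, clues, para):      # find hop spans and answer spans in para[x]
    return [], []

def s1_sem(question, clues, para):        # summarize para[x] into an initial vector X[x]
    return [0.0]

def s2_update(X, edges):                  # one reasoning (propagation) step of System 2
    return X

def predictor_F(x_vec):                   # downstream predictor F
    return sum(x_vec)

def cogqa(question, start_entities, wiki, max_nodes=100):
    """Sketch of Algorithm 1: iterative cognitive-graph expansion and reasoning."""
    nodes = {e: {"para": wiki.get(e), "is_hop": True} for e in start_entities}
    edges, X = set(), {}
    frontier = deque(start_entities)
    while frontier and len(nodes) < max_nodes:
        x = frontier.popleft()
        # clues[x, G]: e.g. sentences of predecessor paragraphs that mention x.
        clues = [wiki.get(p, "") for (p, q) in edges if q == x]
        para = nodes[x]["para"]
        X[x] = s1_sem(question, clues, para)                 # initial X[x]
        if nodes[x]["is_hop"] and para is not None:
            hop_spans, ans_spans = s1_spans(question, clues, para)
            for y in hop_spans:
                if y not in nodes and y in wiki:
                    nodes[y] = {"para": wiki[y], "is_hop": True}
                if y in nodes and (x, y) not in edges:
                    edges.add((x, y))
                    frontier.append(y)                       # frontier node: (re)visit with new clues
            for y in ans_spans:
                nodes.setdefault(y, {"para": None, "is_hop": False})
                if (x, y) not in edges:
                    edges.add((x, y))
                    frontier.append(y)                       # answer nodes are visited to compute sem
        X = s2_update(X, edges)                              # System 2 updates hidden representations
    answers = [n for n, v in nodes.items() if not v["is_hop"]]
    return max(answers, key=lambda n: predictor_F(X.get(n, [0.0]))) if answers else None
```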
Our framework can scale in nature since the only operation referred to all paragraphs is to access some specific paragraphs by their title indexes. For multi-hop questions, traditional retrieval-extraction frameworks might sacrifice the potential of follow-up models, because paragraphs multiple hops away from the question could share few common words and little semantic relation with the question, leading to a failed retrieval. However, these paragraphs can be discovered by iteratively expanding with clues in our framework.
Algorithm 1 describes the procedure of our framework CogQA. After initialization, an iterative process for graph expansion and reasoning begins. In each step we visit a frontier node x, and System 1 reads para[x] under the guidance of clues and the question Q, extracts spans and generates the semantic vector sem[x, Q, clues]. Meanwhile, System 2 updates the hidden representation X and prepares clues[y, G] for any successor node y. The final prediction is made based on X.
Figure 2: Overview of the CogQA implementation. When visiting the node x, System 1 generates new hop and answer nodes based on the clues[x, G] discovered by System 2. It also creates the initial representation sem[x, Q, clues], based on which the GNN in System 2 updates the hidden representations X[x].
3 Implementation
The main part of implementing the CogQA framework is to determine the concrete models of System 1 and 2, and the form of clues. Our implementation uses BERT as System 1 and GNN as System 2. Meanwhile, clues[x, G] are sentences in paragraphs of x's predecessor nodes, from which x is extracted. We directly pass raw sentences as clues, rather than any form of computed hidden states, for easy training of System 1.
Because raw sentences are self-contained and independent of computations from previous iterative steps, training at different iterative steps is then decoupled, leading to efficiency gains during training. Details are introduced in § 3.4. The hidden representations X for graph nodes are updated each time by a propagation step of the GNN. Our overall model is illustrated in Figure 2.
3.1 System 1
The extraction capacity of the System 1 model is fundamental to constructing the cognitive graph, thus a powerful model is needed. Recently, BERT (Devlin et al., 2018) has become one of the most successful language representation models on various NLP tasks, including SQuAD (Rajpurkar et al., 2016). BERT consists of multiple layers of Transformer (Vaswani et al., 2017), a self-attention based architecture, and is elaborately pre-trained on large corpora. Input sentences are composed of two different functional parts, A and B. We use BERT as System 1, and its input when visiting the node x is as follows:

[CLS] Question [SEP] clues[x, G] [SEP] Para[x]

where the segment "[CLS] Question [SEP] clues[x, G] [SEP]" serves as Sentence A, Para[x] serves as Sentence B, and clues[x, G] are sentences passed from predecessor nodes. The output vectors of BERT are denoted as T ∈ R^{L×H}, where L is the length of the input sequence and H is the dimension size of the hidden representations. It is worth noting that for an answer node x, Para[x] is probably missing. Thus we do not extract spans, but we can still calculate sem[x, Q, clues] based on the "Sentence A" part. And when extracting 1-hop nodes from the question to initialize G, we do not calculate semantic vectors and only the Question part exists in the input.
Span Extraction Answers and next-hop entities have different properties. Answer extraction relies heavily on the answer type indicated by the question. For example, "New York City" is more likely than "2019" to be the answer to a where question, while next-hop entities are often the entities whose descriptions match statements in the question. Therefore, we predict answer spans and next-hop spans separately. We introduce "pointer vectors" S_hop, E_hop, S_ans, E_ans as additional learnable parameters to predict the targeted spans. The probability P^{start}_{ans}[i] of the i-th input token being the start of an answer span is calculated as follows:

P^{start}_{ans}[i] = \frac{e^{S_{ans} \cdot T_i}}{\sum_j e^{S_{ans} \cdot T_j}}    (1)

Let P^{end}_{ans}[i] be the probability of the i-th input token being the end of an answer span, which can be calculated following the same formula. We only focus on the positions with the top K start probabilities {start_k}. For each k, the end position end_k is given by

end_k = \arg\max_{start_k \le j \le start_k + maxL} P^{end}_{ans}[j]    (2)

where maxL is the maximum possible length of spans. To identify irrelevant paragraphs, we leverage the negative sampling introduced in § 3.4.1 to train System 1 to generate a negative threshold. Among the top K spans, those whose start probability is less than the negative threshold are discarded. Because the 0th token [CLS] is pre-trained to synthesize all input tokens for the Next Sentence Prediction task (Devlin et al., 2018), P^{start}_{ans}[0] acts as the threshold in our implementation. We expand the cognitive graph with the remaining predicted answer spans as new "answer nodes". The same process is followed to expand "next-hop nodes" by replacing S_ans, E_ans with S_hop, E_hop.
Semantics Generation As mentioned above, the outputs of BERT at position 0 have the ability to summarize the sequence. Thus the most straightforward method is to use T_0 as sem[x, Q, clues].
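As a concrete illustration of this span-extraction head, the following minimal PyTorch sketch (hypothetical tensor and variable names; not the authors' released code) implements Eq. (1)–(2) together with the [CLS]-based negative threshold.

```python
import torch

def extract_spans(T, S, E, K=5, maxL=10):
    """Sketch of Eq. (1)-(2): pointer-vector span extraction with the [CLS] threshold.

    T: (L, H) BERT output vectors for one input sequence.
    S, E: (H,) learnable "pointer vectors" for start/end (S_ans/E_ans or S_hop/E_hop).
    Returns (start, end) index pairs that survive the negative threshold.
    """
    p_start = torch.softmax(T @ S, dim=0)          # Eq. (1)
    p_end = torch.softmax(T @ E, dim=0)            # same form for end positions
    threshold = p_start[0]                         # P_start[0]: the [CLS] position acts as threshold
    spans = []
    topk = torch.topk(p_start, k=min(K, T.size(0))).indices
    for start in topk.tolist():
        if start == 0 or p_start[start] <= threshold:
            continue                               # discard spans below the negative threshold
        window = p_end[start: start + maxL + 1]    # Eq. (2): end position within maxL tokens
        end = start + int(torch.argmax(window))
        spans.append((start, end))
    return spans

# Tiny usage example with random tensors (sequence length L = 20, hidden size H = 8).
L, H = 20, 8
T = torch.randn(L, H)
S_ans, E_ans = torch.randn(H), torch.randn(H)
print(extract_spans(T, S_ans, E_ans))
```

In such a sketch, the position-0 output T_0 is also what the straightforward choice of sem[x, Q, clues] just mentioned would read off.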
However, the last few layers in BERT are mainly in charge of transforming hidden representations for span predictions. In our experiment, the usage of the third-to-last layer output at position 0 as sem[x, Q, clues] performs the best. 3.2 System 2 The first function of System 2 is to prepare clues[x, G] for frontier nodes, which we implement it as collecting the raw sentences of x’s predecessor nodes that mention x. The second function, to update hidden representations X, is the core function of System 2. Hidden representations X ∈Rn×H stand for the understandings of all n entities in G. To fully understand the relation between an entity x and the question Q, barely analyzing semantics sem[x, Q, clues] is insufficient. GNN has been proposed to perform deep learning on graph (Kipf and Welling, 2017), especially relational reasoning owing to the inductive bias of graph structure (Battaglia et al., 2018). In our implementation, a variant of GNN is designed to serve as System 2. For each node x, the initial hidden representation X[x] ∈RH is the semantic vector sem[x, Q, clues] from System 1. Let X′ be the new hidden representations after a propagation step of GNN, and ∆∈Rn×H be aggregated vectors passed from neighbours in the propagation. The updating formulas of X are as follows: ∆= σ((AD−1)T σ(XW1)) (3) X′ = σ(XW2 + ∆) (4) where σ is the activation function and W1, W2 ∈ RH×H are weight matrices. A is the adjacent matrix of G, which is column-normalized to AD−1 where Djj = P i Aij. Transformed hidden vector σ(XW1) is left multiplied by (AD−1)T , which can be explained as a localized spectral filter by Defferrard et al. (2016). In the iterative step of visiting frontier node x, its hidden representation X[x] is updated following Equation (3)(4). In experiments, we observe that this “asynchronous updating” shows no apparent difference in performance with updating X of all the nodes together by multiple steps after G is 2698 finalized, which is more efficient and adopted in practice. 3.3 Predictor The questions in HotpotQA dataset generally fall into three categories: special question, alternative question and general question, which are treated as three different downstream prediction tasks taking X as input. In the test set, they can also be easily categorized according to interrogative words. Special question is the most common case, requesting to find spans such as locations, dates or entity names in paragraphs. We use a two-layer fully connected network (FCN) to serve as predictor F: answer = arg max answer node x F(X[x]) (5) Alternative and general question both aims to compare a certain property of entity x and y in HotpotQA, respectively answered with entity name and “yes or no”. These questions are regarded as binary classification with input X[x] − X[y] and solved by another two identical FCNs. 3.4 Training Our model is trained under a supervised paradigm with negative sampling. In the training set, the next-hop and answer spans are pre-extracted in paragraphs. More exactly, for each para[x] relevant to question Q, we have spans data D[x, Q] = {(y1, start1, end1), ..., (yn, startn, endn)} where the span from starti to endi in para[x] is fuzzy matched with the name of an entity or answer yi. See § 4.1 for detail. 3.4.1 Task #1: Span Extraction The ground truths of P start ans , P end ans , P start hop , P end hop are constructed based on D[x, Q]. There is at most one answer span (y, start, end) in every paragraph, thus gtstart ans is an one-hot vector where gtstart ans [start] = 1. 
However, multiple different next-hop spans might appear in one paragraph, so that gt^{start}_{hop}[start_i] = 1/k, where k is the number of next-hop spans. For the sake of the ability to discriminate irrelevant paragraphs, irrelevant negative hop nodes are added to G in advance. As mentioned in § 3.1, the output of [CLS], T_0, is in charge of generating the negative threshold. Therefore, the ground truth gt^{start}_{ans} for each negative hop node is the one-hot vector where gt^{start}_{ans}[0] = 1.
Cross-entropy loss is used to train the span extraction task in System 1. The losses for the end position and for the next-hop spans are defined in the same way as follows:

L^{start}_{ans} = − \sum_i gt^{start}_{ans}[i] \cdot \log P^{start}_{ans}[i]    (6)

3.4.2 Task #2: Answer Node Prediction
To command the reasoning ability, our model must learn to identify the correct answer node from a cognitive graph. For each question in the training set, we construct a training sample for this task. Each training sample is a composition of the gold-only graph, which is the union of all correct reasoning paths, and negative nodes. Negative nodes include the negative hop nodes used in Task #1 and two negative answer nodes. A negative answer node is constructed from a span extracted at random from a randomly chosen hop node.
For special questions, we first compute the final answer probabilities for each node by performing softmax on the outputs of F. The loss L is defined as the cross entropy between the probabilities and the one-hot vector of the answer node ans:

L = − \log ( \mathrm{softmax}(F(X))[ans] )    (7)

Alternative and general questions are optimized by binary cross entropy in similar ways. The losses of this task are not only back-propagated to optimize the predictors and System 2, but also fine-tune System 1 through the semantic vectors sem[x, Q, clues].
4 Experiment
4.1 Dataset
We use the full-wiki setting of HotpotQA to conduct our experiments. 112,779 questions were collected by crowdsourcing based on the first paragraphs of Wikipedia documents, 84% of which require multi-hop reasoning. The data are split into a training set (90,564 questions), a development set (7,405 questions) and a test set (7,405 questions). All questions in the development and test sets are hard multi-hop cases. In the training set, for each question, an answer and the paragraphs of 2 gold (useful) entities are provided, with multiple supporting facts, i.e., sentences containing key information for reasoning, marked out. There are also 8 unhelpful negative paragraphs for training. During evaluation, only questions are offered, and supporting facts are required besides the answer.
To construct cognitive graphs for training, edges in gold-only cognitive graphs are inferred from supporting facts by fuzzy matching based on Levenshtein distance (Navarro, 2001). For each supporting fact in para[x], if any gold entity or the answer, denoted as y, is fuzzy matched with a span in the supporting fact, edge (x, y) is added.
4.2 Experimental Details
We use the pre-trained BERT-base model released by Devlin et al. (2018) in System 1. The hidden size H is 768, unchanged in the node vectors of the GNN and the predictors. All the activation functions in our model are gelu (Hendrycks and Gimpel, 2016). We train models on Task #1 for 1 epoch and then on Task #1 and #2 jointly for 1 epoch. Hyperparameters in training are as follows:

Model | Task | batch size | learning rate | weight decay
BERT | #1, #2 | 10 | 10^-4, 4 × 10^-5 | 0.01
GNN | #2 | graph | 10^-4 | 0

BERT and GNN are optimized by two different Adam optimizers, where β_1 = 0.9, β_2 = 0.999.
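For concreteness, the two-optimizer setup from the table above might be wired up as in the following PyTorch sketch. The helper, parameter grouping and schedule are illustrative assumptions rather than the authors' training script; in particular, the choice of which of the two listed BERT learning rates applies to which task is left as an assumption.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def build_optimizers(bert, gnn_and_predictors, total_steps, warmup_frac=0.1):
    """Two separate Adam optimizers, as described in the text (hypothetical helper)."""
    opt_bert = Adam(bert.parameters(), lr=4e-5,            # one of the listed BERT rates (assumption)
                    weight_decay=0.01, betas=(0.9, 0.999))
    opt_gnn = Adam(gnn_and_predictors.parameters(), lr=1e-4,
                   weight_decay=0.0, betas=(0.9, 0.999))

    warmup_steps = int(warmup_frac * total_steps)

    def bert_lr_lambda(step):
        # Linear warmup over the first 10% of steps, then linear decay to zero.
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    sched_bert = LambdaLR(opt_bert, lr_lambda=bert_lr_lambda)
    return opt_bert, opt_gnn, sched_bert

# Minimal usage with stand-in modules.
bert = torch.nn.Linear(768, 768)
gnn = torch.nn.Linear(768, 768)
opt_bert, opt_gnn, sched_bert = build_optimizers(bert, gnn, total_steps=1000)
```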
The predictors share the same optimizer as the GNN. The learning rate for the parameters in BERT warms up over the first 10% of steps, and then linearly decays to zero. To select supporting facts, we simply regard the sentences in the clues of any node in the graph as supporting facts.
In the initialization of G, the 1-hop spans exist in the question and can also be detected by fuzzy matching with supporting facts in the training set. The 1-hop entities extracted by our framework can improve the retrieval phase of other models (see § 4.3), which motivated us to separate out the extraction of 1-hop entities to another BERT-base model for the purpose of reuse in implementation.
4.3 Baselines
The first category is previous work or competitors:
• Yang et al. (2018) The strong baseline model proposed in the original HotpotQA paper (Yang et al., 2018). It follows the retrieval-extraction framework of DrQA (2017) and subsumes the advanced techniques in QA, such as self-attention, character-level models and bi-attention.
• GRN, QFE, DecompRC, MultiQA The other models on the leaderboard.3
• BERT State-of-the-art model on single-hop QA. BERT in the original paper requires single-paragraph input, and pre-trained BERT can barely handle paragraphs of at most 512 tokens, far fewer than the average length of concatenated paragraphs. We add relevant sentences from predecessor nodes in the cognitive graph to every paragraph and report the answer span with the maximum start probability over all paragraphs.
• Yang et al. (2018)-IR Yang et al. (2018) with Improved Retrieval. Yang et al. (2018) uses a traditional inverted-index filtering strategy to retrieve relevant paragraphs. Its effectiveness might be challenged due to occasional failures to find the entities mentioned in the question. The main reason is that word-level matching in retrieval usually neglects language models, which indicate the importance and POS of words. We improve the retrieval by adding the 1-hop entities spotted in the question by our model, increasing the coverage of supporting facts from 56% to 72%.
Another category is for ablation study:
• CogQA-onlyR initializes G with the same entities retrieved in Yang et al. (2018) as 1-hop entities, mainly for fair comparison.
• CogQA-onlyQ initializes G only with 1-hop entities extracted from the question, free of retrieved paragraphs. The complete CogQA implementation uses both.
• CogQA-sys1 only retains System 1 and lacks the cascading reasoning in System 2.
3 All these models were unpublished before this paper.
4.4 Results
Following Yang et al. (2018), the evaluation of answers and supporting facts consists of two metrics: Exact Match (EM) and F1 score. Joint EM is 1 only if the answer string and the supporting facts are both strictly correct. Joint precision and recall are the products of those of Ans and Sup, and then joint F1 is calculated. All results of these metrics are averaged over the test set.4
Experimental results show the superiority of our method in multiple aspects:
Overall Performance Our CogQA outperforms all baselines on all metrics by a significant margin (see Table 1).
4 Thus it is possible that overall F1 is lower than both precision and recall.
Table 1: Results on HotpotQA (fullwiki setting). The test set is not public; the maintainer of HotpotQA only offers EM and F1 for every submission. N/A means the model cannot find supporting facts.
Dev set — Model | Ans EM / F1 / Prec / Rec | Sup EM / F1 / Prec / Rec | Joint EM / F1 / Prec / Rec
Yang et al. (2018) | 23.9 / 32.9 / 34.9 / 33.9 | 5.1 / 40.9 / 47.2 / 40.8 | 2.5 / 17.2 / 20.4 / 17.8
Yang et al. (2018)-IR | 24.6 / 34.0 / 35.7 / 34.8 | 10.9 / 49.3 / 52.5 / 52.1 | 5.2 / 21.1 / 22.7 / 23.2
BERT | 22.7 / 31.6 / 33.4 / 31.9 | 6.5 / 42.4 / 54.6 / 38.7 | 3.1 / 17.8 / 24.3 / 16.2
CogQA-sys1 | 33.6 / 45.0 / 47.6 / 45.4 | 23.7 / 58.3 / 67.3 / 56.2 | 12.3 / 32.5 / 39.0 / 31.8
CogQA-onlyR | 34.6 / 46.2 / 48.8 / 46.7 | 14.7 / 48.2 / 56.4 / 47.7 | 8.3 / 29.9 / 36.2 / 30.1
CogQA-onlyQ | 30.7 / 40.4 / 42.9 / 40.7 | 23.4 / 49.9 / 56.5 / 48.5 | 12.4 / 30.1 / 35.2 / 29.9
CogQA | 37.6 / 49.4 / 52.2 / 49.9 | 23.1 / 58.5 / 64.3 / 59.7 | 12.2 / 35.3 / 40.3 / 36.5
Test set — Model | Ans EM / F1 | Sup EM / F1 | Joint EM / F1
Yang et al. (2018) | 24.0 / 32.9 | 3.86 / 37.7 | 1.9 / 16.2
QFE | 28.7 / 38.1 | 14.2 / 44.4 | 8.7 / 23.1
DecompRC | 30.0 / 40.7 | N/A | N/A
MultiQA | 30.7 / 40.2 | N/A | N/A
GRN | 27.3 / 36.5 | 12.2 / 48.8 | 7.4 / 23.6
CogQA | 37.1 / 48.9 | 22.8 / 57.7 | 12.4 / 34.9
The leap of performance mainly results from the superiority of the CogQA framework over traditional retrieval-extraction methods. Since paragraphs that are multiple hops away may literally share few common words and little semantic relation with the question, the retrieval-extraction framework fails to find the paragraphs that become related only after the reasoning clues connected to them are found. Our framework, however, gradually discovers relevant entities following clues.
Logical Rigor QA systems are often criticized for answering questions by shallow pattern matching rather than reasoning. To evaluate the logical rigor of QA, we use the ratio Joint EM / Ans EM, the proportion of "joint correct answers" among correct answers. The joint correct answers are those deduced from all necessary and correct supporting facts. Thus, this proportion stands for the logical rigor of reasoning. The proportion of our method is up to 33.4%, far exceeding the 7.9% of Yang et al. (2018) and the 30.3% of QFE.
Figure 3: Model performance (joint F1 score and average hops) on 8 types of questions with different hops, comparing Yang et al. (2018), Yang et al. (2018)-IR, CogQA-onlyR and CogQA.
Multi-hop Reasoning Figure 3 illustrates the joint F1 scores and average hops of 8 types of questions, including general, alternative and special questions with different interrogative words. As the hop number increases, the performance of Yang et al. (2018) and Yang et al. (2018)-IR drops dramatically, while our approach is surprisingly robust. However, there is no improvement in alternative and general questions, because the evidence for judgment cannot be inferred from supporting facts, leading to a lack of supervision. Further human labeling is needed to answer these questions.
Ablation Studies To study the impacts of initial entities in cognitive graphs, CogQA-onlyR begins with the same initial paragraphs as Yang et al. (2018). We find that CogQA-onlyR still performs significantly better. The performance decreases slightly compared to CogQA, indicating that the contribution mainly comes from the framework.
To compare against the retrieval-extraction framework, CogQA-onlyQ is designed to start only with the entities that appear in the question. Free of elaborate retrieval methods, this setting can be regarded as a natural thinking pattern of human beings, in which only explicit and reliable relations are needed in reasoning. CogQA-onlyQ still outperforms all the baselines, which may reveal the superiority of the CogQA framework over the retrieval-extraction framework.
BERT is not the key factor of improvement, although it plays a necessary role. Vanilla BERT performs similarly to, or even slightly worse than, Yang et al. (2018) on this multi-hop QA task, possibly because of the pertinently designed architectures in Yang et al. (2018) that better leverage the supervision of supporting facts.
To investigate the impact of the absence of System 2, we design a System 1 only approach, CogQA-sys1, which inherits the iterative framework but outputs the answer span with the maximum predicted probability. On Ans metrics, the improvement over the best competitor decreases by about 50%, highlighting the reasoning capacity of the GNN on cognitive graphs.
Figure 4: Case Study. Different forms of cognitive graphs in our results, i.e., Tree, Directed Acyclic Graph (DAG), Cyclic Graph. Circles are candidate answer nodes while rounded rectangles are hop nodes. Green circles are the final answers given by CogQA and check marks represent the annotated ground truth.
Case Study We show how the cognitive graph clearly explains complex reasoning processes in our experiments in Figure 4. The cognitive graph highlights the heart of the question in case (1) – i.e., to choose between the numbers of members in two houses. CogQA makes the right choice based on the semantic similarity between "Senate" and "upper house". Case (2) illustrates that the robustness of the answer can be boosted by exploring parallel reasoning paths. Case (3) is a semantic retrieval question without any entity mentioned, which is intractable for CogQA-onlyQ or even humans. Once combined with information retrieval, our model finally gets the answer "Marijus Adomaitis", while the annotated ground truth is "Ten Walls". However, when backtracking the reasoning process in the cognitive graph, we find that the model has already reached "Ten Walls" and answers with his real name, which is acceptable and even more accurate. Such explainable advantages are not enjoyed by black-box models.
5 Related work
Machine Reading Comprehension The research focus of machine reading comprehension (MRC) has gradually shifted from cloze-style tasks (Hermann et al., 2015; Hill et al., 2015) to more complex QA tasks (Rajpurkar et al., 2016) in recent years. Compared to the traditional computational linguistic pipeline (Hermann et al., 2015), neural network models, for example BiDAF (Seo et al., 2017a) and R-net (Wang et al., 2017), exhibit outstanding capacity for answer extraction in text. Pre-trained on large corpora, recent BERT-based models nearly settle the single-paragraph MRC-QA problem with performance beyond human level, driving researchers to pay more attention to multi-hop reasoning.
Multi-Hop QA Pioneering datasets of multi-hop QA are either based on limited knowledge base schemas (Talmor and Berant, 2018), or under multiple choices setting (Welbl et al., 2018). The noise in these datasets also restricted the development of multi-hop QA until high-quality HotpotQA (Yang et al., 2018) is released recently. The idea of “multi-step reasoning” also breeds multi-turn methods in single paragraph QA (Kumar et al., 2016; Seo et al., 2017b; Shen et al., 2017), assuming that models can capture information at deeper level implicitly by reading the text again. Open-Domain QA Open-Domain QA (QA at scale) refers to the setting where the search space of the supporting evidence is extremely large. Approaches to get paragraph-level answers has been thoroughly investigated by the information retrieval community, which can be dated back to the 1990s (Belkin, 1993; Voorhees et al., 1999; Moldovan et al., 2000). Recently, DrQA (Chen et al., 2017) leverages a neural model to extract the accurate answer from retrieved paragraphs, usually called retrieval-extraction framework, greatly advancing this time-honored research topic again. Improvements are made to enhance retrieval by heuristic sampling (Clark and Gardner, 2018) or 2702 reinforcement learning (Hu et al., 2018; Wang et al., 2018a), while for complex reasoning, necessary revisits to the framework are neglected. 6 Discussion and Conclusion We present a new framework CogQA to tackle multi-hop machine reading problem at scale. The reasoning process is organized as cognitive graph, reaching unprecedented entity-level explainability. Our implementation based on BERT and GNN obtains state-of-art results on HotpotQA dataset, which shows the efficacy of our framework. Multiple future research directions may be envisioned. Benefiting from the explicit structure in the cognitive graph, System 2 in CogQA has potential to leverage neural logic techniques to improve reliability. Moreover, we expect that prospective architectures combining attention and recurrent mechanisms will largely improve the capacity of System 1 by optimizing the interaction between systems. Finally, we believe that our framework can generalize to other cognitive tasks, such as conversational AI and sequential recommendation. Acknowledgements The work is supported by Development Program of China (2016QY01W0200), NSFC for Distinguished Young Scholar (61825602), NSFC (61836013), and a research fund supported by Alibaba. The authors would like to thank Junyang Lin, Zhilin Yang and Fei Sun for their insightful feedback, and responsible reviewers of ACL 2019 for their valuable suggestions. References Alan Baddeley. 1992. Working memory. Science, 255(5044):556–559. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Nicholas J. Belkin. 1993. Interaction with texts: Information retrieval as information-seeking behavior. In Information Retrieval. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1870–1879. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 845–855. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jonathan St BT Evans. 1984. Heuristic and analytic processes in reasoning. British Journal of Psychology, 75(4):451–468. Jonathan St BT Evans. 2003. In two minds: dualprocess accounts of reasoning. Trends in cognitive sciences, 7(10):454–459. Jonathan St BT Evans. 2008. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol., 59:255–278. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4099– 4106. AAAI Press. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Daniel Kahneman and Patrick Egan. 2011. Thinking, fast and slow, volume 1. Farrar, Straus and Giroux New York. 2703 Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387. Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea, Roxana Girju, Richard Goodrum, and Vasile Rus. 2000. The structure and performance of an open-domain question answering system. In Proceedings of the 38th annual meeting on association for computational linguistics, pages 563–570. Association for Computational Linguistics. Gonzalo Navarro. 2001. A guided tour to approximate string matching. ACM computing surveys (CSUR), 33(1):31–88. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017a. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations. Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017b. Query-reduction networks for question answering. 
In International Conference on Learning Representations. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Steven A Sloman. 1996. The empirical case for two systems of reasoning. Psychological bulletin, 119(1):3. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 641–651. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77– 82. Citeseer. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018a. r3: Reinforced ranker-reader for open-domain question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Wei Wang, Ming Yan, and Chen Wu. 2018b. Multigranularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1705–1714. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Yingxu Wang, Dong Liu, and Ying Wang. 2003. Discovering the capacity of human memory. Brain and Mind, 4(2):189–198. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics, 6:287–302. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 263–272 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 263 Relation Embedding with Dihedral Group in Knowledge Graph Canran Xu ∗ eBay Inc. [email protected] Ruijiang Li ∗ eBay Inc. [email protected] Abstract Link prediction is critical for the application of incomplete knowledge graph (KG) in the downstream tasks. As a family of effective approaches for link predictions, embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function. Despite of their successful performances, existing bilinear forms overlook the modeling of relation compositions, resulting in lacks of interpretability for reasoning on KG. To fulfill this gap, we propose a new model called DihEdral, named after dihedral symmetry group. This new model learns knowledge graph embeddings that can capture relation compositions by nature. Furthermore, our approach models the relation embeddings parametrized by discrete values, thereby decrease the solution space drastically. Our experiments show that DihEdral is able to capture all desired properties such as (skew-) symmetry, inversion and (non-) Abelian composition, and outperforms existing bilinear form based approach and is comparable to or better than deep learning models such as ConvE (Dettmers et al., 2018). 1 Introduction Large-scale knowledge graph (KG) plays a critical role in the downstream tasks such as semantic search (Berant et al., 2013), dialogue management (He et al., 2017) and question answering (Bordes et al., 2014). In most cases, despite of its large scale, KG is not complete due to the difficulty to enumerate all facts in the real world. The capability of predicting the missing links based on existing dataset is one of the most important research topics for years. A common representation of KG is a set of triples (head, relation, tail), and the problem of link prediction can be viewed as predicting new triples from the existing set. A ∗Equal contribution. popular approach is KG embeddings, which maps both entities and relations in the KG to a vector space such that the scoring function of entities and relations for ground truth distinguishes from false facts (Socher et al., 2013; Bordes et al., 2013; Yang et al., 2015). Another family of approaches explicitly models the reasoning process on KG by synthesizing information from paths (Guu et al., 2015). More recently, researchers are applying deep learning methods to KG embeddings so that non-linear interaction between entities and relations are enabled (Schlichtkrull et al., 2018; Dettmers et al., 2018). The standard task for link prediction is to answer queries (h, r, ?) or (? r, t). In this context, recent works on KG embedding focusing on bilinear form methods (Trouillon et al., 2016; Nickel et al., 2016; Liu et al., 2017; Kazemi and Poole, 2018) are known to perform reasonably well. The success of this pack of models resides in the fact they are able to model relation (skew-) symmetries. Furthermore, when serving for downstream tasks such as learning first-order logic rule and reasoning over the KG, the learned relation representation is expected to discover relation composition by itself. One key property of relation composition is that in many cases it can be noncommutative. 
For example, exchanging the order between parent_of and spouse_of will result in completely different relation (parent_of as opposed to parent_in_law_of). We argue that, in order to learn relation composition within the link prediction task, this non-commutative property should be explicitly modeled. In this paper, we proposed DihEdral to model the relation in KG with the representation of dihedral group. The elements in a dihedral group are constructed by rotation and reflection operations over a 2D symmetric polygon. As the matrix representations of dihedral group can be symmetric or skew-symmetric, and the multiplication of the 264 group elements can be Abelian or non-Abelian, it is a good candidate to model the relations with all the corresponding properties desired. To the best of our knowledge, this is the first attempt to employ finite non-Abelian group in KG embedding to account for relation compositions. Besides, another merit of using dihedral group is that even the parameters are quantized or even binarized, the performance in link prediction tasks can be improved over state-of-the-arts methods in bilinear form due to the implicit regularization imposed by quantization. The rest of paper is organized as follows: in (§2) we present the mathematical framework of bilinear form modeling for link prediction task, followed by an introduction to group theory and dihedral group. In (§3) we formalize a novel model DihEdral to represent relations with fully expressiveness. In (§4, §5) we develop two efficient ways to parametrize DihEdral and reveal that both approaches outperform existing bilinear form methods. In (§6) we carried out extensive case studies to demonstrate the enhanced interpretability of relation embedding space by showing that the desired properties of (skew-) symmetry, inversion and relation composition are coherent with the relation embeddings learned from DihEdral. 2 Preliminaries 2.1 Bilinear From for KB Link Prediction Let E and R be the set of entities and relations. A triple (h, r, t), where {h, t} ∈E are the head and tail entities, and r ∈R is a relation corresponding to an edge in the KG. In a bilinear form, the entities h, t are represented by vectors h, t ∈RM where M ∈Z+, and relation r is represented by a matrix R ∈RM×M. The score for the triple is defined as φ(h, r, t) = h⊤Rt. A good representation of the entities and relations are learned such that the scores are high for positive triples and low for negative triples. 2.2 Group and Dihedral Group Let gi, gj be two elements in a set G, and ⊙be a binary operation between any two elements in G . The set G forms a group when the following axioms are satisfied: Closure For any two element gi, gj ∈G, gk = gi ⊙gj is also an element in G. Associativity For any gi, gj, gk ∈G, (gi ⊙gj) ⊙ gk = gi ⊙(gj ⊙gk). Identity There exists an identity element e in G such that, for every element g in G, the equation e ⊙g = g ⊙e = g holds. Inverse For each element g, there is its inverse element g−1 such that g ⊙g−1 = g−1 ⊙g = e. If the number of group elements is finite, the group is called a finite group. If the group operation is commutative, i.e. gi ⊙gj = gj ⊙gi for all gi and gj, the group is called Abelian; otherwise the group is non-Abelian. Moreover, if the group elements can be represented by a matrix, with group operations defined as matrix multiplications, the identity element is represented by the identity matrix and the inverse element is represented as matrix inverse. 
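To make the dihedral-group machinery tangible, the following small NumPy sketch (illustrative only, not from the paper) builds the rotation and reflection matrices and checks the group properties just listed, including non-commutativity.

```python
import numpy as np

def rotation(K, m):
    a = 2 * np.pi * m / K
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def reflection(K, m):
    a = 2 * np.pi * m / K
    return np.array([[np.cos(a),  np.sin(a)],
                     [np.sin(a), -np.cos(a)]])

K = 4
elements = [rotation(K, m) for m in range(K)] + [reflection(K, m) for m in range(K)]

def index_of(M):
    # Index of the element numerically equal to M, or None if it is not in the set.
    return next((i for i, E in enumerate(elements) if np.allclose(E, M)), None)

# Closure: the product of any two elements is again one of the 2K elements.
assert all(index_of(A @ B) is not None for A in elements for B in elements)

# Identity and inverses: O_4^(0) is the identity, and every element has an inverse in the set.
assert np.allclose(elements[0], np.eye(2))
assert all(index_of(np.linalg.inv(A)) is not None for A in elements)

# Non-Abelian: a rotation and a reflection generally do not commute.
O1, F0 = rotation(K, 1), reflection(K, 0)
print(np.allclose(O1 @ F0, F0 @ O1))   # False
```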
In the following, we will not distinguish between group element and its corresponding matrix representation when no confusion exists. A dihedral group is a finite group that supports symmetric operations of a regular polygon in two dimensional space. Here the symmetric operations refer to the operator preserving the polygon. For a K-side (K ∈Z+) polygon, the corresponding dihedral group is denoted as DK that consists of 2K elements, within which there are K rotation operators and K reflection operators. A rotation operator Ok rotates the polygon anti-clockwise around the center by a degree of (2πm/K), and a reflection operator Fk mirrors the rotation Ok vertically. Figure 1: Elements in D4. Each subplot represents result after applying the corresponding operator to the square of ‘ACL’ on the upper left corner (on top of O(0) 4 ). The top row corresponds to the rotation operators and the bottom row corresponds to the reflection operators. The element in the dihedral group DK can be 265 represented as 2D orthogonal matrices1: O(m) K = " cos 2πm K  −sin 2πm K  sin 2πm K  cos 2πm K  # F (m) K = " cos 2πm K  sin 2πm K  sin 2πm K  −cos 2πm K  # (1) where m ∈{0, 1, · · · , K}. Correspondingly, the group operation of dihedral group can be represented as multiplication of the representation matrices. Note that when K is evenly divided by 4, rotation matrices O(K/4) K and O(3K/4) K are skewsymmetric, and all the reflection matrices F (m) K and rotation matrices O(0) K , O(K/2) K are symmetric. The representation of D4 is shown in Figure 1. 3 Relation Modeling with Dihedral Group and Expressiveness We propose to model the relations by the group elements in DK. Like ComplEx (Trouillon et al., 2016), we assume an even number of latent dimensions 2L. More specifically, the relation matrix takes a block diagonal form R = diag  R(1), R(2), · · · , R(L) where R(l) ∈DK for l ∈{1, 2, · · · , L}. The corresponding embedding vectors h ∈R2L and t ∈R2L take the form of  h(1), · · · , h(L) and  t(1), · · · , t(L) where h(l), t(l) ∈R2 respectively. As a result, the score for a triple (h, r, t) in bilinear form can be written as a sum of these L components h⊤Rt = PL l=1 h(l)⊤R(l)t(l), We name the model DihEdral because each component R(l) is a representation matrix of a dihedral group element. Lemma 1. The relation matrix R of DihEdral is orthogonal, i.e. RR⊤= R⊤R = I. Lemma 2. The score of (h, r, t) satisfies h⊤Rt = −1 2  R⊤h −t 2 2 −h⊤h −t⊤t  , consequently maximizing score w.r.t. R is equivalent to minimizing R⊤h −t 2 2. Theorem 1. The relations matrices in DihEdral form a group under matrix multiplication. Though its relation embedding takes discrete values, DihEdral is fully expressive as it is able to model relations with desired properties for each component Rl by the corresponding matrices in 1There are more than one 2D representations for the dihedral group DK, and we use the orthogonal representation throughout the paper. Check Steinberg 2012 for details. DK. The properties are summarized in Table 1, with comparison to DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), ANALOGY (Liu et al., 2017) and SimplE (Kazemi and Poole, 2018). 2 The details of expressiveness are described as follows. For notation convenience, we denote T + all the possible true triples, and T −all the possible false triples. Symmetric A relation r is symmetric iff (h, r, t) ∈T + ⇔(t, r, h) ∈T +. Symmetric relations in the real world include synonym, similar_to. 
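The following numpy sketch (ours; it only restates Eq. (1)) builds the 2x2 orthogonal representation of D_K and numerically confirms the properties relied on below: every element is orthogonal (cf. Lemma 1), all reflections are symmetric, the rotation O^(K/4) is skew-symmetric when K is divisible by 4, and a rotation and a reflection need not commute.

```python
import numpy as np

def rotation_op(m, K):
    """Rotation element O_K^(m) of Eq. (1)."""
    t = 2 * np.pi * m / K
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def reflection_op(m, K):
    """Reflection element F_K^(m) of Eq. (1)."""
    t = 2 * np.pi * m / K
    return np.array([[np.cos(t),  np.sin(t)],
                     [np.sin(t), -np.cos(t)]])

K = 4
elements = [rotation_op(m, K) for m in range(K)] + [reflection_op(m, K) for m in range(K)]

# Every element is orthogonal: R R^T = I (cf. Lemma 1).
assert all(np.allclose(R @ R.T, np.eye(2)) for R in elements)
# All reflection matrices are symmetric.
assert all(np.allclose(reflection_op(m, K), reflection_op(m, K).T) for m in range(K))
# When 4 divides K, O^(K/4) is skew-symmetric.
assert np.allclose(rotation_op(K // 4, K), -rotation_op(K // 4, K).T)
# The group is non-Abelian: a rotation and a reflection need not commute.
O1, F0 = rotation_op(1, K), reflection_op(0, K)
assert not np.allclose(O1 @ F0, F0 @ O1)
print("D_4 representation checks passed")
```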
Note that with DihEdral, the component Rl can be a reflection matrix which is symmetric and offdiagonal. This is in contrast to DistMult and ComplEx where the relation matrix has to be diagonal when it is symmetric at the same time. Skew-Symmetric A relation r is skew-symmetric iff (h, r, t) ∈T + ⇔(t, r, h) ∈T −. Skewsymmetric relations in the real world include father_of, member_of. When K is a multiple of 4, pure skew-symmetric matrices in D4 can be chosen. As a result, the relation is guaranteed to be skew-symmetric satisfying φ(h, r, t) = −φ(t, r, h). Inversion r2 is the inverse of r1 iff (h, r1, t) ∈ T + ⇔(t, r2, h) ∈T +. As a real world example, parent_of is the inversion of child_of. The inverse of the relation r is represented by R−1 in an ideal situation: For two positive triples (h, r1, t) and (t, r2, h), we have R⊤ 1 h ≈t and R⊤ 2 t ≈h in an ideal situation (cf. Lemma 2), With enough occurrences of pair {h, t} we have R2 = R−1 1 . Composition r3 is composition of r1 and r2, denoted as r3 = r1 ⊙r2 iff (h, r1, m) ∈ T + ∧(m, r2, t) ∈T + ⇔(h, r3, t) ∈T +. Example of composition in the real world includes nationality = born_in_city ⊙ city_belong_to_nation. Depending on the commutative property, there are two cases of relation compositions: • Abelian r1 and r2 are Abelian if (h, r1 ⊙r2, t) ∈T + ⇔(h, r2 ⊙r1, t) ∈ T +. Real world example includes 2Note that the condition listed in the table is sufficient but not necessary for the desired property. 266 Component Symmetric Skew-Symmetric Composition Abelian Non-Abelian DistMult ri ∈R ✓ ?∗ ✓ NA† ComplEx  ai −bi bi ai  bi = 0 ai = 0 ✓ NA† ANALOGY  ai −bi bi ai  ∪{cj} bi = 0 ai, cj = 0 ✓ NA† SimplE  0 ai bi 0  ai = bi ai = −bi NA† DihEdral DK F (m) K ∪O(0,K/2) K O(K/4,3K/4) K both in O(m) K either in F (m) K Table 1: Comparison on expressiveness for bilinear KB models. ‘NA’ stands for ‘not available’, and ‘✓’ stands for ‘always’. ∗DistMult has no skew-symmetric relation representations but it performs well in benchmark datasets because the entity type of head and tails are different. † The contents in column ‘Composition’ are subject to the assumption that relation composition corresponds the multiplication of the relation representation. We are not certain if there are other composition rules with which these properties are satisfied. opposite_gender ⊙ profession = profession ⊙opposite_gender. • Non-Abelian r1 and r2 are non-Abelian if (h, r1 ⊙r2, t) ∈ T + ⇎ (h, r2 ⊙ r1, t) ∈ T +. Real world example include parent_of ⊙spouse_of ̸= spouse_of ⊙parent_of. In DihEdral, the relation composition operator ⊙corresponds to the matrix multiplication of the corresponding representations, i.e. R3 ≈ R1R2. Consider three positive triples (h, r1, m), (m, r2, t) and (h, r3, t). In the ideal situation, we have R⊤ 1 h ≈m, R⊤ 2 m ≈t, R⊤ 3 h ≈t (cf. Lemma 2), and further R⊤ 2 R⊤ 1 h ≈t. With enough occurrences of such {h, t} pairs in the training dataset, we have R3 ≈R1R2. Note that although all the rotation matrices form a subgroup to dihedral group, and hence algebraically closedthe rotation subgroup could not model non-Abelian relations. To model nonAbelian relation compositions at least one reflection matrix should be involved. 4 Training In the standard traing framework for KG embedding models, parameters Θ = ΘE ∪ΘR, i.e. the union of entity and relation embeddings, are learnt by stochastic optimization methods. 
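As a concrete point of reference for the objective introduced next, the sketch below (our own, written under the notation of Section 3; not the authors' released code) spells out the block-diagonal score h^T R t = sum_l h^(l)T R^(l) t^(l) and numerically checks the identity stated in Lemma 2.

```python
import numpy as np

def d_k_element(m, K, reflect=False):
    """One 2x2 representation matrix of D_K (Eq. (1)): a rotation O_K^(m),
    or a reflection F_K^(m) when reflect=True."""
    t = 2 * np.pi * m / K
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [s, -c]]) if reflect else np.array([[c, -s], [s, c]])

def dihedral_score(h, t, blocks):
    """phi(h, r, t) = sum_l h^(l)T R^(l) t^(l); h, t have shape (2L,) and
    blocks is a list of L 2x2 matrices taken from D_K."""
    return sum(h[2 * l:2 * l + 2] @ R @ t[2 * l:2 * l + 2] for l, R in enumerate(blocks))

rng = np.random.default_rng(0)
L, K = 3, 4
h, t = rng.normal(size=2 * L), rng.normal(size=2 * L)
blocks = [d_k_element(int(rng.integers(K)), K, reflect=bool(rng.integers(2))) for _ in range(L)]

# Lemma 2: since R is orthogonal, h^T R t = -0.5 * (||R^T h - t||^2 - h^T h - t^T t),
# so maximizing the score is equivalent to minimizing ||R^T h - t||^2.
R = np.zeros((2 * L, 2 * L))
for l, block in enumerate(blocks):
    R[2 * l:2 * l + 2, 2 * l:2 * l + 2] = block
lhs = dihedral_score(h, t, blocks)
rhs = -0.5 * (np.sum((R.T @ h - t) ** 2) - h @ h - t @ t)
assert np.isclose(lhs, rhs)
print(lhs)
```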
For each minibatch of positive triples, a small number of negative triples are sampled by corrupting head or tail for each positive triple, then related parameters in the model are updated by minimizing the binary negative log-likelihood such that positive triples will get higher scores than negative triples. Specifically, the loss function is written as follows, min Θ X (h,r,t)∈T +∪T − −log σ (yφ(h, r, t))+λ||ΘE||2, (2) where λ ∈R is the L2 regularization coefficient for entity embeddings only, T + and T −are the sets of positive and sampled negative triples in a minibatch, and y equals to 1 if (h, r, t) ∈T + otherwise −1. σ is a sigmoid function defined as σ(x) = 1/(1 + exp(−x)). Special treatments of the relation representations R are required as they takes discrete values. In the next subsections we describe a reparametrization method for general K, followed by a simple approach when K takes small integers values. With these treatments, DihEdral could be trained within the standard framework. 4.1 Gumbel-Softmax Approach Each relation component R(l) can be parametrized with a one-hot variable c(l) ∈{0, 1}2K encoding 2K choices of matrices in DK: R(l) = P2K k=1 c(l) k Dk where {Dk, k ∈ {1, · · · , 2K}} enumerates DK. The number of parameters for each relation is 2LK in this approach. One-hot variable c(l) is further parametrized by s(l) ∈R2K by Gumbel trick (Jang et al., 2017) with the following steps: 1) take i.i.d. samples q1, q2, . . . , q2K from a Gumbel distribution: qi = −log(−log ui), where ui ∼U(0, 1) are samples from a uniform distribution; 2) use log-softmax 267 form of s(l) to parametrize c(l) ∈{0, 1}2K: c(l) k = exp h (s(l) k + qk)/τ i P2K k=1 exp h (s(l) k + qk)/τ i (3) where τ is the tunable temperature. During training, we start with high temperature, e.g. τ0 = 3, to drive the system out of pool local minimums, and gradually cool the system with τ = max(0.5, τ0 exp(−0.001t)) where t is the number of epochs elapsed. 4.2 Reparametrization with Binary Variables Another parametrization technique for DK where K ∈{4, 6} is to parametrize each element in the matrix R(l) directly. Specifically we have R(l) = " λ −αγ γ αλ # , where λ = cos(2πk/K), γ = sin(2πk/K), k ∈ {0, 1, · · · , 2K −1} and α ∈{−1, 1} is the reflection indicator . Both λ and γ can be parametrized by the same set of binary variables {x, y, z}: λ = ( (x + y)/2 K = 4 y(3 −x)/4 K = 6 , γ = ( (x −y)/2 K = 4 z(x + 1) √ 3/4 K = 6 . In the forward pass, each binary variable b ∈ {x, y, z} is parametrized by taking a element-wise sign function of a real number: b = sign(breal) where breal ∈R. In the backward pass, since the original gradient of sign function is almost zero everywhere such that breal will not be activated, the gradient of loss with respect to the real variable is estimated with the straight-through estimator (STE) (Yin et al., 2019). The functional form for STE is not unique and worth profound theoretical study. In our experiments, we used identity STE (Bengio et al., 2013): ∂loss ∂breal = ∂loss ∂b 1, where 1 stands for element-wise identity. For these two approaches, we name the model as DK-Gumbel for Gumbel-Softmax approach and DK-STE for reparametrization using binary variable approach. 5 Experimental Result This section presents our experiments and results. We first introduce the benchmark datasets used in our experiments, after that we evaluate our approach in the link prediction task. 5.1 Datasets Introduced in Bordes et al. (2013), WN18 and FB15K are popular benchmarks for link prediction tasks. 
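As a concrete reference for the Gumbel-softmax parametrization described in Section 4.1, the PyTorch sketch below (our own approximation of that description, not the released implementation) draws Gumbel noise, applies the relaxation of Eq. (3), and forms the soft relation block R^(l) = sum_k c^(l)_k D_k; the temperature tau would be annealed during training as described above.

```python
import math
import torch

def d_k_matrices(K):
    """Enumerate the 2K representation matrices of D_K (Eq. (1)); shape (2K, 2, 2)."""
    t = 2 * math.pi * torch.arange(K, dtype=torch.float32) / K
    c, s = torch.cos(t), torch.sin(t)
    rot = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)
    ref = torch.stack([torch.stack([c, s], -1), torch.stack([s, -c], -1)], -2)
    return torch.cat([rot, ref], dim=0)

def gumbel_softmax_block(s_logits, dihedral, tau):
    """Relaxed choice of one matrix in D_K: sample Gumbel noise, apply Eq. (3),
    and return the soft relation block R^(l) = sum_k c_k D_k."""
    u = torch.rand_like(s_logits)
    q = -torch.log(-torch.log(u + 1e-20) + 1e-20)     # q_i = -log(-log u_i)
    c = torch.softmax((s_logits + q) / tau, dim=-1)    # Eq. (3), relaxed one-hot
    return torch.einsum('k,kij->ij', c, dihedral)

K, tau = 4, 3.0                                        # tau is annealed during training
s_logits = torch.zeros(2 * K, requires_grad=True)      # s^(l), learned per component
R_block = gumbel_softmax_block(s_logits, d_k_matrices(K), tau)
R_block.sum().backward()                               # gradients flow back to s^(l)
print(R_block.detach())
```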
WN18 is a subset of the famous WordNet database that describes relations between words. In WN18 the most frequent types of relations form reversible pairs (e.g., hypernym to hyponym, part_of to has_part). FB15K is a subsampling of Freebase limited to 15k entities, introduced in Bordes et al. (2013). It contains triples with different characteristics (e.g., one toone relations such as capital_of to many-tomany such as actor_in_film). YAGO3-10 (Dettmers et al., 2018) is a subset of YAGO3 (Suchanek et al., 2007) with each entity contains at least 10 relations. As noted in Toutanova et al. (2015); Dettmers et al. (2018), in the original WN18 and FB15k datasets there are a large amount of test triples appear as reciprocal form of the training samples, due to the reversible relation pairs. Therefore, these authors eliminated the inverse relations and constructed corresponding subsets: WN18RR with 11 relations and FB15K-237 with 237 relations, both of which are free from test data leak. All datasets statistics are shown in Table 2. Dataset |E| |R| Train Valid Test WN18 41k 18 141k 5k 5k WN18RR 41k 11 87k 3k 3k FB15K 15k 1.3k 483k 50k 59k FB15K-237 15k 237 273k 18k 20k YAGO3-10 123k 37 1M 5k 5k Table 2: Statistics of Datasets. 5.2 Evaluation Metric We use the popular metrics filtered HITS@1, 3, 10 and mean reciprocal rank (MRR) as our evaluation metrics as in Bordes et al. (2013). 5.3 Model Selection and Hyper-parameters We implemented DihEdral in PyTorch (Paszke et al., 2017). In all our experiments, we selected the hyperparameters of our model in a grid search setting for the best MRR in the validation set. We 268 WN18 FB15K HITS@N MRR HITS@N MRR 1 3 10 1 3 10 TransE† (Bordes et al., 2013) 8.9 82.3 93.4 45.4 23.1 47.2 64.1 22.1 DistMult† (Yang et al., 2015) 72.8 91.4 93.6 82.2 54.6 73.3 82.4 65.4 ComplEx† (Trouillon et al., 2016) 93.6 94.5 94.7 94.1 59.9 75.9 84.0 69.2 HolE (Nickel et al., 2016) 93.0 94.5 94.7 93.8 40.2 61.3 73.9 52.4 ANALOGY (Liu et al., 2017) 93.9 94.4 94.7 94.2 64.6 78.5 85.4 72.5 Single DistMult (Kadlec et al., 2017) — — 94.6 79.7 — — 89.3 79.8 SimplE (Kazemi and Poole, 2018) 93.9 94.4 94.7 94.2 66.0 77.3 83.8 72.7 R-GCN (Schlichtkrull et al., 2018) 69.7 92.9 96.4 81.9 60.1 76.0 84.2 69.6 ConvE (Dettmers et al., 2018) 93.5 94.6 95.6 94.3 55.8 72.3 83.1 65.7 D4-STE 94.2 94.8 95.2 94.6 64.1 80.3 87.7 73.3 D4-Gumbel 94.2 94.9 95.4 94.6 64.8 78.2 86.4 72.8 Table 3: Link prediction results on WN18 and FB15K datasets. Results marked by ‘†’ are taken from (Trouillon et al., 2016), and the rest of the results are taken from original literatures. trained DK-Gumbel for K ∈{4, 6, 8} and DKSTE for K ∈{4, 6} with AdaGrad optimizer (Duchi et al., 2011), and we didn’t notice significant difference in terms of the evaluation metrics when varying K. In the following we only report the result for K = 4. For D4-Gumbel, we performed grid search for the L2 regularization coefficient λ ∈[10−5, 10−4, 10−3] and learning rate ∈[0.5, 1]. For D4-STE, hyperparamter ranges for the grid search were as follows: λ ∈[0.001, 0.01, 0.1, 0.2], learning rate ∈[0.01, 0.02, 0.03, 0.05, 0.1]. For both settings we performed grid search with batch sizes ∈[512, 1024, 2048] and negative sample ratio ∈[1, 6, 10]. We used embedding dimension 2L = 1500 for FB15K, 2L = 600 for both FB15K-237 and YAGO3-10, 2L = 200 for WN18 and WN18RR. We used the standard train/valid/test splits provided with these datasets. 
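A schematic sketch of the filtered evaluation protocol of Section 5.2 (our own summary; the authors' exact evaluation code may differ): for each test query all candidate entities are scored, other known true answers are filtered out, and MRR and HITS@N are computed from the resulting ranks.

```python
import numpy as np

def filtered_rank(scores, target, known_true):
    """scores: array over all candidate entities; target: index of the gold entity;
    known_true: indices of other entities that also form true triples (filtered out)."""
    scores = scores.copy()
    mask = np.zeros_like(scores, dtype=bool)
    mask[list(known_true)] = True
    mask[target] = False
    scores[mask] = -np.inf               # filter other true answers before ranking
    return int(1 + np.sum(scores > scores[target]))

def filtered_metrics(ranks, ns=(1, 3, 10)):
    """ranks: 1-based filtered ranks of the correct entity for each test query."""
    ranks = np.asarray(ranks, dtype=float)
    mrr = float(np.mean(1.0 / ranks))
    hits = {n: float(np.mean(ranks <= n)) for n in ns}
    return mrr, hits

# toy usage with random scores over 100 candidate entities
rng = np.random.default_rng(1)
ranks = [filtered_rank(rng.normal(size=100), target=0, known_true={3, 7}) for _ in range(50)]
print(filtered_metrics(ranks))
```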
The results of link predictions are shown in Table 3 and 4, where the results for the baselines are directly taken from original literature. DihEdral outperforms almost all models in bilinear form, and even ConvE in FB15K, WN18RR and YAGO3-10. The result demonstrates that even DihEdral takes discretized value in relation representations, proper modeling the underlying structure of relations using DK is essential. 6 Case Studies The learned representation from DihEdral is not only able to reach the state-of-the-art performance in link prediction tasks, but also provides insights with its special properties. In this section, we present the detailed case studies on these properties. In order to achieve better resolutions, we increased the embedding dimension to 2L = 600 for WN18 datasets. −1.0 −0.5 0.0 0.5 1.0 0 200 400 _part_of _has_part percage of 1s: 0.93 −1.0 −0.5 0.0 0.5 1.0 0 200 400 _member_of_domain_usage _synset_domain_usage_of percage of 1s: 0.68 −1.0 −0.5 0.0 0.5 1.0 0 500 1000 act_in_film starring percage of 1s: 0.91 −1.0 −0.5 0.0 0.5 1.0 0 500 1000 people_born_here place_of_birth percage of 1s: 0.86 Figure 2: Relation inversion in WN18 (top) and FB15K (bottom). Each subplot shows the histogram of diagonal elements in R1R2 where r1 is inverse relation of r2. The name of the two relations and the percentage of the 1s in the diagonal are shown in the first, second and third row of the subplot title, respectively. 6.1 Inversion We show the multiplication of some pairs of inversion relations on WN18 and FB15K in Figure 2, 269 WN18RR FB15K-237 YAGO3-10 HITS@N MRR HITS@N MRR HITS@N MRR 1 3 10 1 3 10 1 3 10 DistMult† 39.0 44.0 49.0 43.0 15.5 26.3 41.9 24.1 24.0 38.0 54.0 34.0 ComplEx† 41.0 46.0 51.0 44.0 15.8 27.5 42.8 24.7 26.0 40.0 55.0 36.0 R-GCN — — — — 15.1 26.4 41.7 24.8 — — — — ConvE† 40.0 44.0 52.0 43.0 23.7 35.6 50.1 32.5 35.0 49.0 62.0 44.0 MINERVA∗ 41.3 45.6 51.3 44.8 21.7 32.9 45.6 29.3 — — — — D4-STE 45.2 49.1 53.6 48.0 23.0 35.3 50.2 32.0 38.1 52.3 64.3 47.2 D4-Gumbel 44.2 50.5 55.7 48.6 20.4 33.2 49.6 30.0 29.4 43.6 57.3 38.8 Table 4: Link prediction results on WN18RR and FB15K-237 datasets. Results marked by ‘†’ are taken from (Dettmers et al., 2018), and result marked by ‘∗’ is taken from (Das et al., 2018). and the result is close to an identity matrix. For the relation pair {_member_of_domain_usage, _synset_domain_usage_of}, the multiplication deviates from ideal identity matrix as the performance for these two relations are poorer compared to the others. We also repeat the same case study for other bilinear embedding methods, however their multiplications are not identity, but close to diagonal matrices with different elements. O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 _verb_group O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 _similar_to Symmetric Relations O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 50 100 _hyponym O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 75 100 _instance_hypernym O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 20 40 60 80 _synset_domain_region_of O(0) 4 O(1) 4 O(2) 4 O(3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 20 40 _synset_domain_topic_of Skew-Symmetric Relations Figure 3: Historgram of each component of D4 for WN18. The top and bottom row corresponds to symmetric and skew-symmetric relations, respectively. Note that O(1,3) 4 are skew-symmetric components and others are symmetric. 
6.2 Symmetry and Skew-Symmetry Since the KB datasets do not contain negative triples explicitly, there is no penalty to model skew-symmetric relations with symmetric matrices. This is perhaps the reason why DistMult performs well on FB15K dataset in which a lot of relations are skew-symmetric. To resolve this ambiguity, for each positive triple (h, r, t) with a definite skew-symmetric relation r, a negative triple (t, r, h) is sampled with probability 0.5. After adding this new negative sampling scheme in D4-Gumbel, the symmetric and skew-symmetric relations can be distinguished on WN18 dataset without reducing performance on link prediction tasks. Figure 3 shows that both symmetric and skew-symmetric relations favor corresponding components in D4 as expected. Again, due to imperfect performance of _synset_domain_topic_of, its corresponding representation is imperfect as well. We also conduct the same experiment without adding this sampling scheme, the histogram for the symmetric relations are similar, but there is no strong preference for skew-symmetric relations. 6.3 Relation Composition In FB15K-237 dataset the majority of patterns is relation composition. However, these compositions are Abelian only because all the inverse relations are filtered out on purpose. To justify if non-Abelian relation compositions can be discovered by DihEdral in an ideal situation, we generate a synthetic dataset called FAMILY. Specifically, we first generated two generations of people with equal number of male and females in each generation, and randomly assigned spouse edges within each generation and child and parent edges between the two generations, after which the sibling, parent_in_law and 270 −1.0 −0.5 0.0 0.5 1.0 0 200 400 people_is_nominated_award award_is_nominated_to_work people_winning_work percage of 1s: 0.96 −1.0 −0.5 0.0 0.5 1.0 0 100 200 300 film_directed_by_director director_direct_film film_is_prequel_of_film percage of 1s: 0.94 −1.0 −0.5 0.0 0.5 1.0 0 50 100 150 m_ishusband_f f_hasdaughter_f m_hasdaughter_f percage of 1s: 0.86 −1.0 −0.5 0.0 0.5 1.0 0 50 100 150 f_motherlaw_f f_hasson_m f_iswife_m percage of 1s: 0.90 −1.0 −0.5 0.0 0.5 1.0 0 50 100 f_hasdaughter_f m_ishusband_f m_hasdaughter_f percage of 1s: 0.54 −1.0 −0.5 0.0 0.5 1.0 0 50 100 f_hasson_m f_motherlaw_f f_iswife_m percage of 1s: 0.58 Figure 4: Relation composition on FB15K-237 and FAMILY. Each subplot shows the histogram of diagonal elements in R1R2R−1 3 where r3 is treated as the composition of r1 and r2. The name of the three relations and the percentage of the 1s in the diagonal are shown in the 1st, 2nd, 3rd and 4th line of subplot title. The two subplots in the first rows shows composition for FB15K-237, and subplots on the second and third row are used to check composition and non-Abelian on FAMILY. child_in_law edges are connected based on commonsense logic. We trained D4-Gumbel on FAMILY with latent dimension 2L = 400. In addition to the loss in Eq. 2, we add the following regularization term to encourage the score of positive triple to be higher than that of negative triple for each component independently. − L X l=1 log σ  h(l)⊤R(l)t(l) −h∗(l)⊤R(l)t∗(l) . where (h, r, t) ∈T +, and the corresponding negative triple (h∗, r, t∗) ∈T −. For each composition r3 = r1⊙r2, we compute the histogram of R1R2R−1 3 . The result for relation compositions in FB15K-237 and FAMILY is shown in Figure 4, from which we could see good composition as matrix multiplication. 
We also reveal the non-Abelian property in FAMILY by exchanging the order of r1 and r2. 7 Related Works In this section we discuss the related works and their connections to our approach. TransE (Bordes et al., 2013) takes relations as a translating operator between head and tail entities. More complicated distance functions (Wang et al., 2014; Lin et al., 2015b,a) are also proposed as extensions to TransE. TorusE (Ebisu and Ichise, 2018) proposed a novel distance function defined over a torus by transform the vector space by an Abelian group onto a n-dimensional torus. ProjE (Shi and Weninger, 2017) designs a neural network with a combination layer and a projection layer. R-GCN (Schlichtkrull et al., 2018) employs convolution over multiple entities to capture spectrum of the knowledge graph. ConvE (Dettmers et al., 2018) performs 2D convolution on the concatenation of entity and relation embeddings, thus by nature introduces non-linearity to enhance expressiveness. In RESCAL (Nickel et al., 2011) each relation is represented by a full-rank matrix. As a downside, there is a huge number of parameters in RESCAL making the model prone to overfitting. A totally symmetric DistMult (Yang et al., 2015) model simplifies RESCAL by representing each relation with a diagonal matrix. To parametrize skewsymmetric relations, ComplEx (Trouillon et al., 2016) extends DistMult by using complex-valued instead of real-valued vectors for entities and relations. The representation matrix of ComplEx supports both symmetric and skew-symmetric relations while being closed under matrix multiplication. HolE (Nickel et al., 2016) models the skewsymmetry with circular correlation between entity embeddings, thus ensures shifts in covariance between embeddings at different dimensions. It was recently showed that HolE is isomophic to ComplEx (Hayashi and Shimbo, 2017). ANALOGY (Liu et al., 2017) and SimplE (Kazemi and Poole, 2018) both reformulate the tensor decomposition approach in light of analogical and reversible relations. Though embedding based approach achieves state-of-the-art performance on link prediction task, symbolic relation composition is not explicitly modeled. In contrast, the latter goal is currently popularized by directly modeling the reasoning paths (Neelakantan et al., 2015; Xiong et al., 2017; Das et al., 2018; Lin et al., 2018; Guo et al., 2019). As paths are consistent with rea271 soning logic structure, non-Abelian composition is supported by nature. DihEdral is more expressive when compared to other bilinear form based embedding methods such as DistMult, ComplEX and ANALOGY. As the relation matrix is restricted to be orthogonal, DihEdral could bridge translation based and bilinear form based approaches as the training objective w.r.t. the relation matrix is similar (cf Lemma 2). Besides, DihEdral is the first embedding method to incorporate non-Abelian relation compositions in terms of matrix multiplications (cf. Theorem 1). 8 Conclusion This paper proposed DihEdral for KG relation embedding. By leveraging the desired properties of dihedral group, relation (skew-) symmetry, inversion, and (non-) Abelian compositions are all supported. Our experimental results on benchmark KGs showed that DihEdral outperforms existing bilinear form models and even deep learning methods. Finally, we demonstrated that the above g properties can be learned from DihEdral by extensive case studies, yielding a substantial increase in interpretability from existing models. 
Acknowledgments The authors would like to thank Vivian Tian, Hua Yang, Steven Li and Xiaoyuan Wu for their supports, and anonymous reviewers for their helpful comments. References Yoshua Bengio, Nicholas L´eonard, and Aaron C. Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of EMNLP. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of NeurIPs. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. 2018. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. In Proceedings in ICLR. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of AAAI. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159. Takuma Ebisu and Ryutaro Ichise. 2018. TorusE: Knowledge graph embedding on a lie group. In Proceedings of AAAI. Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of ICML. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of EMNLP. Katsuhiko Hayashi and Masashi Shimbo. 2017. On the equivalence of holographic and complex embeddings for link prediction. In Proceedings of ACL. He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of ACL. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In Proceedings of ICLR. Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Seyed Mehran Kazemi and David Poole. 2018. SimplE embedding for link prediction in knowledge graphs. In Proceedings of NeurIPs. Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In Proceedings in EMNLP. Yankai Lin, Zhiyuan Liu, Huan-Bo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. In Proceedings of EMNLP. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI. Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embeddings. In Proceedings of ICML. 272 Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of ACL. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of AAAI. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. 
In Proceedings of ICML. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proceedings of NIPS Autodiff Workshop. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103. Baoxu Shi and Tim Weninger. 2017. ProjE: Embedding projection for knowledge graph completion. In Proceedings of AAAI. Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NeurIPs. Benjamin Steinberg. 2012. Representation Theory of Finite Groups. Springer-Verlag New York. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. In Proceedings of WWW. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of ICML. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of AAAI. Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceedings in EMNLP. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR. Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley J. Osher, Yingyong Qi, and Jack Xin. 2019. Understanding straight-through estimator in training activation quantized neural nets. In Proceedings of ICLR.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2704–2713 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2704 Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, Bowen Zhou JD AI Research {ming.tu,guangtao.wang,jing.huang,yun.tang,xiaodong.he,bowen.zhou}@jd.com Abstract Multi-hop reading comprehension (RC) across documents poses new challenge over singledocument RC because it requires reasoning over multiple documents to reach the final answer. In this paper, we propose a new model to tackle the multi-hop RC problem. We introduce a heterogeneous graph with different types of nodes and edges, which is named as Heterogeneous Document-Entity (HDE) graph. The advantage of HDE graph is that it contains different granularity levels of information including candidates, documents and entities in specific document contexts. Our proposed model can do reasoning over the HDE graph with nodes representation initialized with co-attention and self-attention based context encoders. We employ Graph Neural Networks (GNN) based message passing algorithms to accumulate evidences on the proposed HDE graph. Evaluated on the blind test set of the Qangaroo WIKIHOP data set, our HDE graph based single model delivers competitive result, and the ensemble model achieves the state-of-the-art performance. 1 Introduction Being able to comprehend a document and output correct answer given a query/question about content in the document, often referred as machine reading comprehension (RC) or question answering (QA), is an important and challenging task in natural language processing (NLP). Plenty of data sets have been constructed to facilitate research on this topic, such as SQuAD (Rajpurkar et al., 2016, 2018), NarrativeQA (Koˇcisk`y et al., 2018) and CoQA (Reddy et al., 2018). Many neural models have been proposed to tackle the machine RC/QA problem (Seo et al., 2016; Xiong et al., 2016; Tay et al., 2018), and great success has been achieved, especially after the release of the BERT (Devlin et al., 2018). Query: record label get ready Support doc 1: Mason Durell Betha (born August 27, 1977), better known by stage name Mase (formerly often stylized Ma$e or MA$E), is an American hip hop recording artist and minister. He is best known for being signed to Sean “Diddy” Combs’s label Bad Boy Records. . . . Support doc 2: “Get Ready” was the only single released from Mase’s second album, Double Up. It was released on May 25, 1999, produced by Sean “Puffy” Combs, Teddy Riley and Andreao “Fanatic” Heard and featured R&B group, Blackstreet, it contains a sample of “A Night to Remember”, performed by Shalamar. . . . Support doc 3: Bad Boy Entertainment (also known as Bad Boy Records) is an American record label founded in 1993 by Sean Combs. . . . Candidates: bad boy records, record label, rock music, . . . Answer: bad boy records Figure 1: A WIKIHOP example. Words with different colors indicate the evidences across documents. However, current research mainly focuses on machine RC/QA on a single document or paragraph, and still lacks the ability to do reasoning across multiple documents when a single document is not enough to find the correct answer. To promote the study for multi-hop RC over multiple documents, two data sets are recently proposed: WIKIHOP (Welbl et al., 2018) and HotpotQA (Yang et al., 2018). 
These two data sets require multi-hop reasoning over multiple supporting documents to find the answer. In Figure 1, we show an excerpt from one sample in WIKIHOP development set to illustrate the need for multi-hop reasoning. Two types of approaches have been proposed on the multi-hop multi-document RC problem. The first is based on previous neural RC models. The earliest attempt in (Dhingra et al., 2018) concatenated all supporting documents and designed a recurrent layer to explicitly exploit the skip connections between entities given automatically gener2705 ated coreference annotations. Adding this layer to the neural RC models improved performance on multi-hop tasks. Recently, an attention based system (Zhong et al., 2019) utilizing both documentlevel and entity-level information achieved stateof-the-art results on WIKIHOP data set, proving that techniques like co-attention and self-attention widely employed in single-document RC tasks are also useful in multi-document RC tasks. The second type of research work is based on graph neural networks (GNN) for multi-hop reasoning. The study in Song et al. (2018) adopted two separate name entity recognition (NER) and coreference resolution systems to locate entities in support documents. Those entities serve as nodes in GNN to enable multi-hop reasoning across documents. Work in De Cao et al. (2018) directly used mentions of candidates (found in documents by simple exact matching strategy) as GNN nodes and calculate classification scores over mentions of candidates. In this paper, we propose a new method to solve the multi-hop RC problem across multiple documents. Inspired by the success of GNN based methods (Song et al., 2018; De Cao et al., 2018) for multi-hop RC, we introduce a new type of graph, called Heterogeneous Document-Entity (HDE) graph. Our proposed HDE graph has the following advantages: • Instead of graphs with single type of nodes (Song et al., 2018; De Cao et al., 2018), the HDE graph contains different types of queryaware nodes representing different granularity levels of information. Specifically, instead of only entity nodes as in (Song et al., 2018; De Cao et al., 2018), we include nodes corresponding to candidates, documents and entities. In addition, following the success of Coarse-grain Fine-grain Coattention (CFC) network (Zhong et al., 2019), we apply both co-attention and self-attention to learn queryaware node representations of candidates, documents and entities; • The HDE graph enables rich information interaction among different types of nodes thus facilitate accurate reasoning. Different types of nodes are connected with different types of edges to highlight the various structural information presented among query, document and candidates. Through ablation studies, we show the effectiveness of our proposed HDE graph for multihop multi-document RC task. Evaluated on the blind test set of WIKIHOP, our proposed endto-end trained single neural model beats the current published state-of-the-art results in (Zhong et al., 2019) and is the 2nd best model on the WIKIHOP leaderboard. Meanwhile, our ensemble model ranks 1st place on the WIKIHOP leadrboard and surpasses the human performance (as reported in (Welbl et al., 2018)) on this data set by 0.2% 1. This is achieved without using pretrained contextual ELMo embedding (Peters et al., 2018). 
2 Related Work The study presented in this paper is directly related to existing research on multi-hop reading comprehension across multiple documents (Dhingra et al., 2018; Song et al., 2018; De Cao et al., 2018; Zhong et al., 2019; Kundu et al., 2018). The method presented in this paper is similar to previous studies using GNN for multi-hop reasoning (Song et al., 2018; De Cao et al., 2018). Our novelty is that we propose to use a heterogeneous graph instead of a graph with single type of nodes to incorporate different granularity levels of information. The co-attention and self-attention based encoding of multi-level information presented in each input is also inspired by the CFC model (Zhong et al., 2019) because they show the effectiveness of attention mechanisms. Our model is very different from the other two studies (Dhingra et al., 2018; Kundu et al., 2018): these two studies both explicitly score the possible reasoning paths with extra NER or coreference resolution systems while our method does not require these modules and we do multi-hop reasoning over graphs. Besides these studies, our work is also related to the following research directions. Multi-hop RC: There exist several different data sets that require reasoning in multiple steps in literature, for example bAbI (Weston et al., 2015), MultiRC (Khashabi et al., 2018) and OpenBookQA (Mihaylov et al., 2018). A lot of systems have been proposed to solve the multi-hop RC problem with these data sets (Sun et al., 2018; Wu et al., 2019). However, these data sets require multi-hop reasoning over multiple sentences or multiple common knowledge while the problem 1By May 30th 2019, http://qangaroo.cs.ucl. ac.uk/leaderboard.html 2706 we want to solve in this paper requires collecting evidences across multiple documents. GNN for NLP: Recently, there is considerable amount of interest in applying GNN to NLP tasks and great success has been achieved. For example, in neural machine translation, GNN has been employed to integrate syntactic and semantic information into encoders (Bastings et al., 2017; Marcheggiani et al., 2018); Zhang et al. (2018) applied GNN to relation extraction over pruned dependency trees; the study by Yao et al. (2018) employed GNN over a heterogeneous graph to do text classification, which inspires our idea of the HDE graph; Liu et al. (2018) proposed a new contextualized neural network for sequence learning by leveraging various types of non-local contextual information in the form of information passing over GNN. These studies are related to our work in the sense that we both use GNN to improve the information interaction over long context or across documents. 3 Methodology In this section, we describe different modules of the proposed Heterogeneous Document-Entity (HDE) graph-based multi-hop RC model. The overall system diagram is shown in Figure 2. Our model can be roughly categorized into three parts: initializing HDE graph nodes with co-attention and self-attention based context encoding, reasoning over HDE graph with GNN based message passing algorithms and score accumulation from updated HDE graph nodes representations. 3.1 Context encoding Given a query q with the form of (s, r, ?) which represents subject, relation and unknown object respectively, a set of support documents Sq and a set of candidates Cq, the task is to predict the correct answer a∗to the query. 
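For concreteness, a minimal sketch of how one WIKIHOP sample consumed by the model can be organized (the field names are our own illustration rather than the data set's official schema; the content mirrors the example in Figure 1):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WikiHopSample:
    subject: str           # query subject s
    relation: str          # query relation r
    supports: List[str]    # support documents S_q
    candidates: List[str]  # candidate answers C_q
    answer: str            # gold answer a*

sample = WikiHopSample(
    subject="get ready",
    relation="record_label",
    supports=[
        "Mason Durell Betha ... signed to Sean 'Diddy' Combs's label Bad Boy Records. ...",
        "'Get Ready' was the only single released from Mase's second album, Double Up. ...",
        "Bad Boy Entertainment (also known as Bad Boy Records) is an American record label ...",
    ],
    candidates=["bad boy records", "record label", "rock music"],
    answer="bad boy records",
)
```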
To encode information including in the text of query, candidates and support documents, we use a pretrained embedding matrix (Pennington et al., 2014) to convert word sequences to sequences of vectors. Let Xq ∈ Rlq×d, Xi s ∈Rli s×d and Xj c ∈Rlj c×d represent the embedding matrices of query, i-th supporting document and j-th candidate of a sample, where lq, li s and lj c are the numbers of words in query, i-th supporting document and j-th candidate respectively. d is the dimension of the word embedding. We use bidirectional recurrent neural networks (RNN) Candidates Query Documents encoder encoder encoder C S coattn coattn coattn Entity extraction Self-attn Self-attn Self-attn Cand nodes Doc nodes Entity nodes Entityscores FC FC Final scores Cand score 1 Cand scores 2 Figure 2: System diagram. S and C are the number of support documents and candidates respectively. We use yellow nodes to represent query-aware candidate representation, blue nodes to represent extracted queryaware entity representation and green nodes to represent query-aware document representation. with gated recurrent unit (GRU) (Cho et al., 2014) to encode the contextual information present in the query, supporting documents and candidates separately. The output of query, document and candidate encoders are Hq ∈Rlq×h, Hi s ∈Rli s×h and Hj c ∈Rlj c×h. h denotes the output dimension of RNN encoders. Entity extraction: entities play an import role in bridging multiple documents and connecting a query and the corresponding answer as shown in figure 1. For example, the entity “get ready” in query and two entities “Mase” and “Sean Combs” co-occur in the 2nd support document, and both “Mase” and “Sean Combs” can lead to the correct answer “bad boy records”. Based on this observation, we propose to extract mentions of both query subject s and candidates Cq from documents. We will show later that by including mentions of query subject the performance can be improved. We use simple exact match strategy (De Cao et al., 2018; 2707 Zhong et al., 2019) to find the locations of mentions of query subject and candidates, i.e. we need the start and end positions of each mention. Each mention is treated as an entity. Then, representations of entities can be taken out from the i-th document encoding Hi s. We denote an entity’s representation as M ∈Rlm×h where lm is the length of the entity. Co-attention: Co-attention has achieved great success for single document reading comprehension tasks (Seo et al., 2016; Xiong et al., 2016), and recently was applied to multiple-hop reading comprehension (Zhong et al., 2019). Coattention enables the model to combine learned query contextual information attended by document and document contextual information attended by query, with inputs of one query and one document. We follow the implementation of coattention in (Zhong et al., 2019). We use the co-attention between a query and a supporting document for illustration. Same operations can be applied to other documents, or between the query and extracted entities. Given RNN-encoded sequences of the query Hq ∈ Rlq×h and a document Hi s ∈Rli s×h, the affinity matrix between the query and document can be calculated as Ai qs = Hi s(Hq)⊺∈Rli s×lq, (1) where ⊺denotes matrix transpose. Each entry of the matrix Ai qs indicates how related two words are, one from the query and one from the document. For simplification, in later context, we ignore the superscript i which indicates the operation on the i-th document. 
Next we derive the attention context of the query and document as follows: Cq = softmax(A⊺ qs)Hs ∈Rlq×h, (2) Cs = softmax(Aqs)Hq ∈Rls×h. (3) softmax(·) denotes column-wise normalization. We further encode the co-attended document context using a bidirectional RNN f with GRU: Ds = f(softmax(Aqs)Cq) ∈Rls×h. (4) The final co-attention context is the columnwise concatenation of Cs and Ds: Sca = [Cs; Ds] ∈Rls×2h. (5) We expect Sca carries query-aware contextual information of supporting documents as shown by Zhong et al. (2019). The same co-attention module can also be applied to query and candidates, and query and entities (as shown in Figure 2) to get Cca and Eca. Note that we do not do coattention between query and entities corresponding to query subject because query subject is already a part of the query. To keep the dimensionality consistent, we apply a single-layer multi-layer perceptron (MLP) with tanh activation function to increase the dimension of the query subject entities to 2h. Self-attentive pooling: while co-attention yields a query-aware contextual representation of documents, self-attentive pooling is designed to convert the sequential contextual representation to a fixed dimensional non-sequential feature vector by selecting important query-aware information (Zhong et al., 2019). Self-attentive pooling summarizes the information presented in the coattention output by calculating a score for each word in the sequence. The scores are normalized and a weighted sum based pooling is applied to the sequence to get a single feature vector as the summarization of the input sequence. Formally, the self-attention module can be formulated as the following operations given Sca as input: as = softmax(MLP(Sca)) ∈Rls×1, (6) ssa = a⊺ sSca ∈R1×2h, (7) where MLP(·) is a two-layer MLP with tanh as activation function. Similarly, after self-attentive pooling, we can get csa and esa for each candidate and entity. Our context encoding module is different from the one used in Zhong et al. (2019) in following aspects: 1) we compute the co-attention between query and candidates which is not presented in the CFC model. 2) For entity word sequences, we first calculate co-attention with query and then use selfattention to summarize each entity word sequence while Zhong et al. (2019) first do self-attention on entity word sequences to get a sequence of entity vectors in each documents. Then, they apply coattention with query. 3.2 Reasoning over HDE graph Graph building: let a HDE graph be denoted as G = {V, E}, where V stands for node representations and E represents edges between nodes. In 2708 our proposed HDE graph based model, we treat each document, candidate and entity extracted from documents as nodes in the HDE graph, i.e., each document (candidate/entity) corresponds to one node in the HDE graph. These nodes represent different granularity levels of query-aware information: document nodes encode documentlevel global information regarding to the query; candidate nodes encode query-aware information in candidates; entity nodes encode query-aware information in specific document context or the query subject. The HDE graph is built to enable graph-based reasoning. It exploits useful structural information among query, support documents and candidates. We expect our HDE graph could perform multi-hop reasoning to locate the answer nodes or entity nodes of answers given a query. 
Self-attentive pooling generates vector representations for each candidate, document and entity, which can be directly employed to initialize the node representations V. For edge connections E, we define the following types of edges between pairs of nodes to encode various structural information in the HDE graph: 1. an edge between a document node and a candidate node if the candidate appear in the document at least one time. 2. an edge between a document node and an entity node if the entity is extracted from the document. 3. an edge between a candidate node and an entity node if the entity is a mention of the candidate. 4. an edge between two entity nodes if they are extracted from the same document. 5. an edge between two entity nodes if they are mentions of the same candidate or query subject and they are extracted from different documents. 6. all candidate nodes connect with each other. 7. entity nodes that do not meet previous conditions are connected. Type 4, 5, 7 edges are also employed in (De Cao et al., 2018) where the authors show the effectiveness of those different types of edges. Similarly, Figure 3: A toy example of HDE graph. The dash dot lines connecting documents (green nodes) and candidates (yellow nodes) correspond to type 1 edge. The normal dash lines connecting documents and entities (blue nodes) correspond to type 2 edge. The square dot lines connecting entities and candidates correspond to type 3 edge. The red solid line connecting two entities correspond to type 4 edge. The purple solid line correspond to type 5 edge. The black solid lines connecting two candidates correspond to type 6 edge. For good visualization, we ignore the type 7 edge in this figure. we treat these different edges differently to make information propagate differently over these seven different types of edges. More details will be introduced in next paragraph about message passing over the HDE graph. In Figure 3, we illustrate a toy example of the proposed HDE graph. Message passing: we define how information propagates over the graph in order to do reasoning over the HDE graph. Different variants of GNN have different implementations of message passing strategies. In this study, we follow the message passing design in GCN (Kipf and Welling, 2016; De Cao et al., 2018) as it gives good performance on validation set compared to other strategies (Veliˇckovi´c et al., 2017; Xu et al., 2018). Generally, the message passing over graphs can be achieved in two steps: aggregation and combination (Hamilton et al., 2017), and this process can be conducted multiple times (usually referred as layers or hops in GNN literature). Here, we give the aggregation and combination formulation of the message passing over the proposed HDE graph. The first step aggregates information from neighbors of each node, which can be formulated as zk i = X r∈R 1 |N r i | X j∈N r i fr(hk j ), (8) 2709 where R is the set of all edge types, N r i is the neighbors of node i with edge type r and hk j is the node representation of node j in layer k (h0 j initialized with self-attention outputs). |·| indicates the size of the neighboring set. fr defines a transformation on the neighboring node representations, and can be implemented with a MLP. zk i represents the aggregated information in layer k for node i, and can be combined with the transformed node i representation: uk i = fs(hk i ) + zk i , (9) where fs can also be implemented with a MLP. 
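A simplified PyTorch sketch of this aggregation and combination step (our reading of Eqs. (8) and (9), not the authors' code; the gated update introduced next in the text is omitted, and f_r, f_s are single linear layers here):

```python
import torch
import torch.nn as nn

class HDEMessagePassing(nn.Module):
    def __init__(self, dim, num_edge_types):
        super().__init__()
        self.f_r = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_edge_types)])
        self.f_s = nn.Linear(dim, dim)

    def forward(self, h, adj):
        """h: node features of shape (N, dim); adj: (num_edge_types, N, N) binary
        adjacency with adj[r, i, j] = 1 if node j is a neighbor of node i under edge type r."""
        z = torch.zeros_like(h)
        for r, f_r in enumerate(self.f_r):
            deg = adj[r].sum(dim=1, keepdim=True).clamp(min=1)   # |N_i^r|; clamp avoids 0-division
            z = z + adj[r] @ f_r(h) / deg                        # Eq. (8): mean over type-r neighbors
        return self.f_s(h) + z                                   # Eq. (9): combine with self term

# toy usage: 5 nodes, hidden size 8, and 7 edge types as in the HDE graph
layer = HDEMessagePassing(dim=8, num_edge_types=7)
h = torch.randn(5, 8)
adj = torch.randint(0, 2, (7, 5, 5)).float()
u = layer(h, adj)
print(u.shape)  # torch.Size([5, 8])
```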
It has been shown that GNN suffers from the smoothing problem if the number of layers is large (Kipf and Welling, 2016). The smoothing problem can result in similar nodes representation and lose the discriminative ability when doing classification on nodes. To tackle this problem, we add a gating mechanism (Gilmer et al., 2017) on the combined information uk i . gk i = sigmoid(fg([uk i ; hk i ])) (10) hk+1 i = tanh(uk i ) ⊙gk i + hk i ⊙(1 −gk i ) (11) sigmoid(·) denotes the sigmoid function on transformed concatenation of uk i and hk i . gk i is then applied to the combined information to control the amount information from computed update or from the original node representation. tanh(·) functions as a non-linear activation function. ⊙ denotes element-wise multiplication. In this study, fr, fs and fg are all implemented with single-layer MLPs, the output dimension of which is 2h. After K times message passing, all candidate, document and entity nodes will have their final updated node representation. 3.3 Score accumulation The final node representations of candidate and entity nodes corresponding to mentions of candidates are used to calculate classification scores. This procedure can be formulated as a = fC(HC) + ACCmax(fE(HE)), (12) where HC ∈RC×2h is the node representation of all candidate nodes and C is the number of candidates. HE ∈RM×2h is the node representation of all entity nodes that correspond to candidates, and M is the number of those nodes. ACCmax is an operation that takes the maximum over scores of entities that belong to the same candidate. fC and fE are implemented with two-layer MLPs with tanh activation function. The hidden layer size is half of the input dimension, and the output dimension is 1. We directly sum the scores from candidate nodes and entity nodes as the final scores over multiple candidates. Thus, the output score vector a ∈RC×1 gives a distribution over all candidates. Since the task is multi-class classification, we use cross-entropy loss as training objective which takes a and the labels as input. 4 Experiments 4.1 Dataset We use WIKIHOP (Welbl et al., 2018) to validate the effectiveness of our proposed model. The query of WIKIHOP is constructed with entities and relations from WIKIDATA, while supporting documents are from WIKIREADING (Hewlett et al., 2016). A bipartite graph connecting entities and documents is first built and the answer for each query is located by traversal on this graph. Candidates that are type-consistent with the answer and share the same relation in query with the answer are included, resulting in a set of candidates. Thus, WIKIHOP is a multi-choice style reading comprehension data set. There are totally about 43K samples in training set, 5K samples in development set and 2.5K samples in test set. The test set is not provided and can only be evaluated on blindly. The task is to predict the correct answer given a query and multiple supporting documents. In the experiment, we train our proposed model on all training samples in WIKIHOP, and tune model hyperparameters on all samples in development set. We only evaluate our proposed model on the unmasked version of WIKIHOP. 4.2 Experimental settings Queries, support documents and candidates are tokenized into word sequences with NLTK (Loper and Bird, 2002). We empirically split the query into relation and subject entity. Exact matching strategy is employed to locate mentions of both subject entity and candidates in supporting documents. 
300-dimensional GLoVe embeddings (with 840B tokens and 2.2M vocabulary size) (Pennington et al., 2014) and 100-dimensional 2710 Single models Accuracy (%) Dev Test BiDAF 42.9 Coref-GRU(Dhingra et al., 2018) 56.0 59.3 MHQA-GRN(Song et al., 2018) 62.8 65.4 Entity-GCN(De Cao et al., 2018) 64.8 67.6 CFC(Zhong et al., 2019) 66.4 70.6 Kundu et al. (2018) 67.1 DynSAN* 71.4 Proposed 68.1 70.9 Ensemble models Entity-GCN(De Cao et al., 2018) 68.5 71.2 DynSAN* 73.8 Proposed 70.9 74.3 Table 1: Performance comparison among different models on WIKIHOP development and test set. The results of “BiDAF” are presented in the paper by Welbl et al. (2018). Models annotated with “*” are unpublished but available on WIKIHOP leaderboard. “-” indicates unavailable numbers. character n-gram embeddings (Hashimoto et al., 2017) are used to convert words into 400dimensional vector representations. Out of vocabulary words are initialized with random vectors. The embedding matrices are not updated during training. The proposed model is implemented with PyTorch (Paszke et al., 2017). More details about experimental and hyperparameter settings can be found in supplementary materials. The performance on development set is measured after each training epoch, and the model with the highest accuracy is saved and submitted to be evaluated on the blind test set. We will make our code publicly available after the review process. We also prepared an ensemble model consisting of 15 models with different hyperparameter settings and random seeds. We used the simple majority voting strategy to fuse the candidate predictions of different models together. 4.3 Results In Table 1, we show the results of the our proposed HDE graph based model on both development and test set and compare it with previously published results. We show that our proposed HDE graph based model improves the published state-of-the-art accuracy on development set from 67.1% (Kundu et al., 2018) to 68.1%, on the blind test set from 70.6% (Zhong et al., 2019) to 70.9%. Compared to the best single model “DynSAN” Model Accuracy (%) Dev ∆ Full model 68.1 - HDE graph 65.5 2.6 - different edge types 66.7 1.4 - candidate nodes scores 67.1 1.0 - entity nodes scores 66.6 1.5 - candidate nodes 66.2 1.9 - document nodes 67.6 0.5 - entity nodes 63.6 4.5 Table 2: Ablation results on the WIKIHOP dev set. Model Single-follow Multi-follow With HDE graph 67.8 71.0 Without HDE graph 66.7 67.0 Table 3: Accuracy(%) comparison under different types of samples. (unpublished) on WIKIHOP leaderboard, our proposed model is still 0.5% worse. Compared to two previous studies using GNN for multi-hop reading comprehension (Song et al., 2018; De Cao et al., 2018), our model surpasses them by a large margin even though we do not use better pre-trained contextual embedding ELMo (Peters et al., 2018). For the ensemble models, our proposed system achieves the state-of-the-art performance, which is also 0.2% higher than the reported human performance (Welbl et al., 2018). Even though our single model is a little worse than the “DynSAN”, our ensemble model is better than both the ensembled “DynSAN” and the ensembled “Entity-GCN”. 4.4 Ablation studies In order to better understand the contribution of different modules to the performance, we conduct several ablation studies on the development set of WIKIHOP. 
If we remove the proposed HDE graph and directly use the representations of candidates and of entities corresponding to candidate mentions (equation 7) for score accumulation, accuracy on the WIKIHOP development set drops by 2.6% absolute. This demonstrates the efficacy of the proposed HDE graph for multi-hop reasoning across multiple documents. If we treat all edge types equally, without using different GNN parameters for different edge types (equation 9), accuracy drops by 1.4%, indicating that the distinct information encoded by different types of edges is also important for good performance. If only the scores of entity nodes (the right part of equation 12) are used in score accumulation, dev-set accuracy degrades by 1.0%; if only the scores of candidate nodes (the left part of equation 12) are used, accuracy degrades by 1.5%. This means that the scores on entity nodes contribute more to the classification, which is reasonable because entities carry context information from the documents while candidates do not.

We also investigate the effect of removing different types of nodes. Note that removing nodes is not the same as removing the scores of candidate/entity nodes: removing scores means the scores on those nodes are not used during score accumulation, but the nodes still participate in message passing on the HDE graph, whereas removing one type of node means the nodes and their corresponding edges do not exist in the HDE graph at all. The ablation shows that removing entity nodes results in the largest degradation of performance, while removing document nodes results in the least. This finding is consistent with the study by De Cao et al. (2018), which emphasizes the importance of entities in multi-hop reasoning. The small contribution of document nodes is probably caused by excessive information loss during self-attentive pooling over long sequences; better ways of encoding document information into the graph are needed. More ablation studies are included in the supplementary materials due to space constraints.

4.5 Result analysis

To investigate how the HDE graph helps multi-hop reasoning, we conduct experiments on the WIKIHOP development set in which we discard the HDE graph and only use the candidate and entity representations output by self-attention. In Table 3, "Single-follow" (2069 samples in the dev set) means a single document is enough to answer the query, while "Multi-follow" (2601 samples) means multiple documents are needed. This information is provided by Welbl et al. (2018). We observe in Table 3 that performance is consistently better "with HDE graph" in both cases. In the "Single-follow" case the absolute accuracy improvement is 1.1%, while a significant 4.0% improvement is achieved in the "Multi-follow" case, which contains even more samples than the "Single-follow" case. This shows that the proposed HDE graph is good at reasoning over multiple documents.

Figure 4: Plots between number of support documents (x-axis) and number of examples (left y-axis), and between number of support documents and accuracy (right y-axis).

Figure 5: Plots between number of candidates (x-axis) and number of examples (left y-axis), and between number of candidates and accuracy (right y-axis).

We also investigate how our model performs w.r.t.
the number of support documents and number of candidates given an input sample. In Figure 4, the blue line with square markers shows the number of support documents in one sample (x-axis) and the corresponding frequencies in the development set (y-axis). The orange line with diamond markers shows the change of accuracy with the increasing of number of support documents. We choose the number of support documents with more than 50 appearances in the development set. For example, there are about 300 samples with 5 support documents and the accuracy of our model on these 300 samples is about 80%. Overall, we find the accuracy decreases with the increasing number of support documents. This is reasonable because more documents possibly means more entities and bigger graph, and is more challenging for reasoning. Figure 5 indicates the 2712 similar trend (when the number of candidates are less than 20) with the increasing number of candidates, which we believe is partly caused by the larger HDE graph. Also, more candidates cause more confusion in the selection. 5 Conclusion We propose a new GNN-based method for multihop RC across multiple documents. We introduce the HDE graph, a heterogeneous graph for multiple-hop reasoning over nodes representing different granularity levels of information. We use co-attention and self-attention to encode candidates, documents, entities of mentions of candidates and query subjects into query-aware representations, which are then employed to initialize graph node representations. Evaluated on WIKIHOP, our end-to-end trained single neural model delivers competitive results while our ensemble model achieves the state-of-the-art performance. In the future, we would like to investigate explainable GNN for this task, such as explicit reasoning path in (Kundu et al., 2018), and work on other data sets such as HotpotQA. 6 Acknowledgements We would like to thank Johannes Welbl from University College London for running evaluation on our submitted model. References Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Kyunghyun Cho, B van Merrienboer, Caglar Gulcehre, F Bougares, H Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderdecoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 42–48. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263–1272. JMLR. org. 
Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923– 1933. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. arXiv preprint arXiv:1608.03542. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface:a challenge set for reading comprehension over multiple sentences. In Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL). Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´aabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association of Computational Linguistics, 6:317–328. Souvik Kundu, Tushar Khot, and Ashish Sabharwal. 2018. Exploiting explicit paths for multihop reading comprehension. arXiv preprint arXiv:1811.01127. Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, and Jackie Chi Kit Cheung. 2018. Contextualized non-local neural networks for sequence learning. arXiv preprint arXiv:1811.08600. Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028. 2713 Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 486–492. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 784–789. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. arXiv preprint arXiv:1810.13441. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems, pages 4911–4922. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics, 6:287–302. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. arXiv preprint arXiv:1901.04713. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380. Liang Yao, Chengsheng Mao, and Yuan Luo. 2018. Graph convolutional networks for text classification. arXiv preprint arXiv:1809.05679. Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. arXiv preprint arXiv:1901.00603.
2019
260
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2714–2725 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2714 Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension Yichen Jiang∗ Nitish Joshi∗ Yen-Chun Chen Mohit Bansal UNC Chapel Hill {yichenj, nitish, yenchun, mbansal}@cs.unc.edu Abstract Multi-hop reading comprehension requires the model to explore and connect relevant information from multiple sentences/documents in order to answer the question about the context. To achieve this, we propose an interpretable 3-module system called ExplorePropose-Assemble reader (EPAr). First, the Document Explorer iteratively selects relevant documents and represents divergent reasoning chains in a tree structure so as to allow assimilating information from all chains. The Answer Proposer then proposes an answer from every root-to-leaf path in the reasoning tree. Finally, the Evidence Assembler extracts a key sentence containing the proposed answer from every path and combines them to predict the final answer. Intuitively, EPAr approximates the coarse-to-fine-grained comprehension behavior of human readers when facing multiple long documents. We jointly optimize our 3 modules by minimizing the sum of losses from each stage conditioned on the previous stage’s output. On two multi-hop reading comprehension datasets WikiHop and MedHop, our EPAr model achieves significant improvements over the baseline and competitive results compared to the state-of-the-art model. We also present multiple reasoningchain-recovery tests and ablation studies to demonstrate our system’s ability to perform interpretable and accurate reasoning.1 1 Introduction The task of machine reading comprehension and question answering (MRC-QA) requires the model to answer a natural language question by finding relevant information and knowledge in a given natural language context. Most MRC ∗equal contribution; part of this work was done during the second author’s internship at UNC (from IIT Bombay). 1Our code is publicly available at: https://github.com/jiangycTarheel/EPAr datasets require single-hop reasoning only, which means that the evidence necessary to answer the question is concentrated in a single sentence or located closely in a single paragraph. Such datasets emphasize the role of locating, matching, and aligning information between the question and the context. However, some recent multi-document, multi-hop reading comprehension datasets, such as WikiHop and MedHop (Welbl et al., 2017), have been proposed to further assess MRC systems’ ability to perform multi-hop reasoning, where the required evidence is scattered in a set of supporting documents. These multi-hop tasks are much more challenging than previous single-hop MRC tasks (Rajpurkar et al., 2016, 2018; Hermann et al., 2015; Nguyen et al., 2016; Yang et al., 2015) for three primary reasons. First, the given context contains a large number of documents (e.g., 14 on average, 64 maximum for WikiHop). Most existing QA models cannot scale to the context of such length, and it is challenging to retrieve a reasoning chain of documents with complete information required to connect the question to the answer in a logical way. Second, given a reasoning chain of documents, it is still necessary for the model to consider evidence loosely distributed in all these documents in order to predict the final answer. 
Third, there could be more than one logical way to connect the scattered evidence (i.e., more than one possible reasoning chain) and hence this requires models to assemble and weigh information collected from every reasoning chain before making a unified prediction. To overcome the three difficulties elaborated above, we develop our interpretable 3-module system based on examining how a human reader would approach a question, as shown in Fig. 1a and Fig. 1b. For the 1st example, instead of reading the entire set of supporting documents sequen2715 The Haunted Castle ( Dutch : Spookslot ) is a haunted attraction in the amusement park Efteling in the Netherlands . It was designed by Ton van de Ven and ... Efteling is a fantasy-themed amusement park in Kaatsheuvel in the Netherlands. The attractions are based on elements from ancient myths and legends, fairy tales, fables, and folklore. Kaatsheuvel is a village in the Dutch province of North Brabant, situated ... it is the largest village in and the capital of the municipality of Loon op Zand, which also consists ... Query subject: The Haunted Castle Query body: located_in_the_administrative_territorial_entity Answer: Loon op Zand The Polsterberg Pumphouse ( German : Polsterberger Hubhaus ) is a pumping station above the Dyke Ditch in the Upper Harz in central Germany ... The Dyke Ditch is the longest artificial ditch in the Upper Harz in central Germany. The Upper Harz refers to ... the term Upper Harz covers the area of the seven historical mining towns (\"Bergst\u00e4dte\") - Clausthal, Zellerfeld, Andreasberg, Altenau, Lautenthal, Wildemann and Grund - in the present-day German federal state of Lower Saxony. Query subject: Polsterberg Pumphouse Query body: located_in_the_administrative_territorial_entity Answer: Lower Saxony (a) The Haunted Castle ( Dutch : Spookslot ) is a haunted attraction in the amusement park Efteling in the Netherlands . It was designed by Ton van de Ven and ... Efteling is a fantasy-themed amusement park in Kaatsheuvel in the Netherlands. The attractions are based on elements from ancient myths and legends, fairy tales, fables, and folklore. Kaatsheuvel is a village in the Dutch province of North Brabant, situated ... it is the largest village in and the capital of the municipality of Loon op Zand, which also consists ... Query subject: The Haunted Castle Query body: located_in_the_administrative_territorial_entity Answer: Loon op Zand The Polsterberg Pumphouse ( German : Polsterberger Hubhaus ) is a pumping station above the Dyke Ditch in the Upper Harz in central Germany ... The Dyke Ditch is the longest artificial ditch in the Upper Harz in central Germany. The Upper Harz refers to ... the term Upper Harz covers the area of the seven historical mining towns (\"Bergst\u00e4dte\") - Clausthal, Zellerfeld, Andreasberg, Altenau, Lautenthal, Wildemann and Grund - in the present-day German federal state of Lower Saxony. Query subject: Polsterberg Pumphouse Query body: located_in_the_administrative_territorial_entity Answer: Lower Saxony (b) Figure 1: Two examples from the QAngaroo WikiHop dataset where it is necessary to combine information spread across multiple documents to infer the correct answer. (a): The hidden reasoning chain of 3 out of a total of 37 documents for a single query. (b): Two possible reasoning chains that lead to different answers: “Upper Harz” and “Lower Saxony”, while the latter (green solid arrow) fits better with query body “administrative territorial entity”. 
tially, she would start from the document that is directly related to the query subject (e.g., “The Haunted Castle”). She could then read the second and third document by following the connecting entities “park Efteling” and “Kaatsheuvel”, and uncover the answer “Loon op Zand” by comparing phrases in the final document to the query. In this way, the reader accumulates knowledge about the query subject by exploring inter-connected documents, and eventually uncovers the entire reasoning chain that leads to the answer. Drawing inspiration from this coarse (document-level) plus finegrained (word-level) comprehension behavior, we first construct a T-hop Document Explorer model, a hierarchical memory network, which at each recurrent hop, selects one document to read, updates the memory cell, and iteratively selects the next related document, overall constructing a reasoning chain of the most relevant documents. We next introduce an Answer Proposer that performs query-context reasoning at the word-level on the retrieved chain and predicts an answer. Specifically, it encodes the leaf document of the reasoning chain while attending to its ancestral documents, and outputs ancestor-aware word representations for this leaf document, which are compared to the query to propose a candidate answer. However, these two components above cannot handle questions that allow multiple possible reasoning chains that lead to different answers, as shown in Fig. 1b. After the Document Explorer selects the 1st document, it finds that both the 2nd and 3rd documents are connected to the 1st document via entities “the Dyke Ditch” and “Upper Harz” respectively. This is a situation where a single reasoning chain diverges into multiple paths, and it is impossible to tell which path will lead to the correct answer before finishing exploring all possible reasoning chains/paths. Hence, to be able to weigh and combine information from multiple reasoning branches, the Document Explorer is rolled out multiple times to represent all the divergent reasoning chains in a ‘reasoning tree’ structure, so as to allow our third component, the Evidence Assembler, to assimilate important evidence identified in every reasoning chain of the tree to make one final, unified prediction. To do so, the Assembler selects key sentences from each root-to-leaf document path in the ‘reasoning tree’ and forms a new condensed, salient context which is then bidirectionally-matched with the query representation to output the final prediction. Via this procedure, evidence that was originally scattered widely across several documents is now collected concentratedly, hence transforming the task to a scenario where previous standard phrase-matching style QA models (Seo et al., 2017; Xiong et al., 2017; Dhingra et al., 2017) can be effective. Overall, our 3-module, multi-hop, reasoningtree based EPAr (Explore-Propose-Assemble reader) closely mimics the coarse-to-fine-grained reading and reasoning behavior of human readers. We jointly optimize this 3-module system by having the following component working on the outputs from the previous component and minimizing the sum of the losses from all 3 modules. The Answer Proposer and Evidence Assembler are trained with maximum likelihood using ground-truth answers as labels, while the Document Explorer is weakly supervised by heuristic reasoning chains constructed via TF-IDF and documents with the ground-truth answer. 
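A rough sketch of the joint optimization just described, in which the three per-module cross-entropy losses are summed into a single objective, is shown below. The module and variable names are hypothetical placeholders, and we assume each module returns its output together with a differentiable loss tensor; this is not the released implementation.

```python
def joint_training_step(document_explorer, answer_proposer, evidence_assembler,
                        batch, optimizer):
    """One joint update: each module consumes the previous module's output,
    and the sum of the three cross-entropy losses is minimized end to end."""
    chains, loss_de = document_explorer(batch)                  # weakly supervised doc chains
    proposals, loss_ap = answer_proposer(batch, chains)         # one proposed answer per chain
    _, loss_ea = evidence_assembler(batch, chains, proposals)   # final answer prediction loss
    loss = loss_de + loss_ap + loss_ea                          # sum of the 3 losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```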
On WikiHop, our system achieves the highestreported dev set result of 67.2%, outperforming all published models2 on this task, and 69.1% 2At the time of submission: March 3rd, 2019. 2716 Query Subject ... ( aware) proposed candidate 0 proposed candidate 1 proposed candidate 4 A sentence in containing candidate 0 A sentence in containing candidate 1 A sentence in containing candidate 4 synthesized context Final prediction Attention DE AP AP AP AP BiDAF EA Values: softmax Keys: sampling I Hiearchical, Key-value Memory Network: DE ... { , , ... , } ( aware) ( aware) A sentence in containing query subject Query Body document-reasoning tree ... ... ... ... ... Figure 2: The full architecture of our 3-module system EPAr, with the Document Explorer (DE, left), Answer Proposer (AP, middle), and Evidence Assembler (EA, right). accuracy on the hidden test set, which is competitive with the current leaderboard state-of-theart. On MedHop, our system outperforms all previous models, achieving the new state-of-the-art test leaderboard accuracy. It also obtains statistically significant (p < 0.01) improvement over our strong baseline on the two datasets. Further, we show that our Document Explorer combined with 2-hop TF-IDF retrieval is substantially better than two TF-IDF-based retrieval baselines in multiple reasoning-chain recovery tests including on human-annotated golden reasoning chains. Next, we conduct ablations to prove the effectiveness of the Answer Proposer and Evidence Assembler in comparison with several baseline counterparts, and illustrate output examples of our 3-module system’s reasoning tree. 2 Model In this section, we describe our 3-module system that constructs the ‘reasoning tree’ of documents and predicts the answer for the query. Formally, given a query q and a corresponding set of supporting documents D = {di}N i=1, our system tries to find a reasoning chain of documents d′ 1, . . . , d′ T , d′ i ∈D.3 The information from these selected documents is then combined to predict the answer among the given answer candidates. In the WikiHop and MedHop datasets, a query consists of a subject qsub (e.g., “The Haunted Castle” in Fig. 1a) and a body qbod (e.g., “located in the administrative territorial entity”). There is one single correct answer a (e.g., “Loon op Zand”) in the set of candidate answers A = {cl}L l=1 such that the relation qbod holds true between qsub and a. 3In WikiHop dataset, T ≤3. 2.1 Retrieval and Encoding In this section, we describe the pre-processing document retrieval and encoding steps before introducing our three modules of EPAr. We adopt a 2-hop document retrieval procedure to reduce the number of supporting documents that are fed to our system. We first select one document with the shortest TF-IDF distance to the query. We then rank the remaining documents according to their TF-IDF distances to the first selected document and add the top N′−1 documents to form the context with a total of N′ documents for this query. Adding this preprocessing step is not only helpful in reducing GPU memory consumption but also helps bootstrap the training by reducing the search space of the Document Explorer (Sec. 2.2). We then use a Highway Network (Srivastava et al., 2015) of dimension d, which merges the character embedding and GloVe word embedding (Pennington et al., 2014), to get the word representations for the supporting documents and query4. 
This gives three matrices: X ∈RN′×K×d, Qsub ∈RJs×d and Qbod ∈RJb×d, K, Js, Jb are the lengths of supporting documents, query body, and query subject respectively. We then apply a bi-directional LSTM-RNN (Hochreiter and Schmidhuber, 1997) of v hidden units to get the contextual word representations for the documents H = {h1, · · · , hN′} s.t. hi ∈RK×2v and the query Usub ∈RJs×2v, Ubod ∈RJb×2v. Other than the word-level encoding, we also collect compact representations of all the supporting docu4Unlike previous works (Welbl et al., 2017; Dhingra et al., 2018; De Cao et al., 2018; Song et al., 2018a) that concatenate supporting documents together to form a large context, we instead maintain the document-level hierarchy and encode each document separately. 2717 ments, denoted as P = {p1, · · · , pN′}, by applying the self-attention mechanism in Zhong et al. (2019) (see details in appendix). We obtain embeddings for each candidate ci ∈{c1, c2, .., cL} using the average-over-word embeddings of the first mention5 of the candidate in H. 2.2 Document Explorer Our Document Explorer (DE, shown in the left part of Fig. 2) is a hierarchical memory network (Chandar et al., 2016). It utilizes the reduced document representations P = {p1, p2, · · · , pN′} and their corresponding word-level representations H = {h1, h2, · · · , hN′} as the key-value knowledge base and maintains a memory m using a Gated Recurrent Unit (GRU) (Cho et al., 2014). At every step, the DE selects a document which is related to the current memory state and updates the internal memory. This iterative procedure thus constructs a reasoning chain of documents. Read Unit At each hop t, the model computes a document-selection distribution P over every document based on the bilinear-similarity between the memory state m and document representations P using the following equations6: xn = pT nWrmt χ = softmax(x) P(di) = χi The read unit looks at all document (representation) P and selects (samples) a document di ∼ P. The write operation updates the internal state (memory) using this sampled document. Write Unit After the model selects di ∈D, the model then computes a distribution over every word in document di based on the similarity between the memory state m and its word representations hi ∈H. This distribution is then used to compute the weighted average of all word representations in document di. We then feed this weighted average ˜h as the input to the GRU cell and update its memory state m (subscript i is omitted for simplicity): wk = hT k Wwm ω = softmax(w) ˜h = XK k=1 hkωk mt+1 = GRU(˜h, mt) (1) Combining the ‘read’ and ‘write’ operations described above, we define a recurrent function: 5We tried different approaches to make use of all mentions of every candidate, but observe no gain in final performance. 6We initialize the memory state with the last state of the query subject Usub to make first selected document directly conditioned on the query subject. (ˆht+1, mt+1) = fDE(mt) such that ˆht+1 ∈H and ˆht ̸= ˆht+1. Therefore, unrolling the Document Explorer for T hops results in a sequence of non-repeating documents ˆH = {ˆh1, · · · , ˆhT } such that each document ˆhi is selected iteratively based on the current memory state building up one reasoning chain of documents. In practice, we roll out DE multiple times to obtain a document-search ‘reasoning tree’, where each root-to-leaf path corresponds to a query-to-answer reasoning chain. 2.3 Answer Proposer The Answer Proposer (AP, shown in the middle part of Fig. 
2) takes as input a single chain of documents {ˆh1, · · · , ˆhT } from one of the chains in the ‘reasoning tree‘ created by the DE, and tries to predict a candidate answer from the last document ˆhT in that reasoning chain. Specifically, we adopt an LSTM-RNN with an attention mechanism (Bahdanau et al., 2015) to encode the ˆhT to ancestor-aware representations y by attending to [ˆh1,...,T−1]. The model then computes a distribution over words ˆhi T ∈ˆhT based on the similarity between y and the query representation. This distribution is then used to compute the weighted average of word representations {h1 T , h2 T , · · · , hK T }. Finally, AP proposes an answer among all candidates {c1, · · · , cL} that has the largest similarity score with this weighted average ˜hT . ek i = vT tanh(Whˆhi cct + Wssk + b) ak = softmax(ek); ck = X i ak i hi cct yk = LSTM(ˆhk−1 T , sk−1, ck−1) wk = α(yk, us) + α(yk, ub); ϵ = softmax(w) a = XK k=1 ˆhk T ϵk; Scorel = β(cl, a) (2) where ˆhcct = [ˆh1,...,T−1] is the concatenation of documents in the word dimension; us and ub are the final states of Usub and Ubod respectively, and sk is the LSTM’s hidden states at the kth step. The Answer Proposer proposes the candidate with the highest score among {c1, · · · , cL}. All computations in Eqn. 2 that involve trainable parameters are marked in bold.7 This procedure produces ancestor-aware word representations that encode the interactions between the leaf document and ancestral document, and hence models the multihop, cross-document reasoning behavior. 7See appendix for the definition of the similarity functions α and β. 2718 2.4 Evidence Assembler As shown in Fig. 1b, it is possible that a reasoning path could diverge into multiple branches, where each branch represents a unique, logical way of retrieving inter-connected documents. Intuitively, it is very difficult for the model to predict which path to take without looking ahead. To solve this, our system first explores multiple reasoning chains by rolling out the Document Explorer multiple times to construct a ‘reasoning tree’ of documents, and then aggregates information from multiple reasoning chains using a Evidence Assembler (EA, shown in the right part of Fig. 2), to predict the final answer. For each reasoning chain, the Assembler first selects one sentence that contains the candidate answer proposed by the Answer Proposer and concatenates all these sentences into a new document h′. This constructs a highly informative and condensed context, at which point previous phrase-matching style QA models can work effectively. Our EA uses a bidirectional attention flow model (Seo et al., 2017) to get a distribution over every word in h′ and compute the weighted average of word representations {h′1, · · · , h′K} as ˜h′. Finally, the EA selects the candidate answer of the highest similarity score w.r.t. ˜h′. 2.5 Joint Optimization Finally, we jointly optimize the entire model using the cross-entropy losses from our Document Explorer, Answer Proposer, and Evidence Assembler. Since the Document Explorer samples documents from a distribution, we use weak supervision at the first and the final hops to account for the otherwise non-differentiabilty in the case of end-to-end training. Specifically, we use the document having the shortest TF-IDF distance w.r.t. the query subject to supervise the first hop and the documents which contain at least one mention of the answer to supervise the last hop. 
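A minimal sketch of how such weak-supervision targets could be constructed is given below; the helper name and the use of scikit-learn's TfidfVectorizer are our assumptions, since the paper does not specify its implementation.

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def weak_supervision_targets(query_subject, documents, answer):
    """First-hop label: the document closest to the query subject under TF-IDF.
    Last-hop label: one randomly sampled document that mentions the answer."""
    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(documents)
    query_vec = vectorizer.transform([query_subject])
    first_hop = int(cosine_similarity(query_vec, doc_vecs)[0].argmax())

    answer_docs = [i for i, d in enumerate(documents) if answer.lower() in d.lower()]
    last_hop = random.choice(answer_docs) if answer_docs else None
    return first_hop, last_hop

# Toy example
docs = ["The Haunted Castle is an attraction in the park Efteling.",
        "Efteling is an amusement park in Kaatsheuvel.",
        "Kaatsheuvel is the capital of the municipality of Loon op Zand."]
print(weak_supervision_targets("The Haunted Castle", docs, "Loon op Zand"))  # (0, 2)
```

In the actual system these targets only supervise the first and last hops of the Document Explorer; the intermediate hops are learned implicitly.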
This allows the Document Explorer to learn the chain of documents leading to the document containing the answer from the document most relevant to the query subject. Since there can be multiple documents containing the answer, we randomly sample a document as the label at the last hop. For the Answer Proposer and Evidence Assembler, we use crossentropy loss from the answer selection process. 3 Experiments and Results 3.1 Datasets and Metrics We evaluate our 3-module system on the WikiHop and the smaller MedHop multi-hop datasets from QAngaroo (Welbl et al., 2017). For the WikiHop dev set, each instance is also annotated as “follows” or “not follows”, i.e., whether the answer can be inferred from the given set of supporting documents, and “single” or “multiple”, indicating whether the complete reasoning chain comprises of single or multiple documents. This allows us to evaluate our system on less noisy data and to investigate its strength in queries requiring different levels of multi-hop reasoning. Please see appendix for dataset and metric details. 3.2 Implementation Details For WikiHop experiments, we use 300-d GloVe word embeddings (Pennington et al., 2014) for our main full-size ‘EPAr’ model and 100-d GloVE word embeddings for our smaller ‘EPAr’ model which we use throughout the Analysis section for time and memory feasibility. We also use the last hidden state of the encoding LSTM-RNN to get the compact representation for all supporting documents in case of smaller model, in contrast to self-attention (Sec. B in Appendix) as in the full-size ‘EPAr’ model. The encoding LSTMRNN (Hochreiter and Schmidhuber, 1997) has 100-d hidden size for our ‘EPAr’ model whereas the smaller version has 20-d hidden size. The embedded GRU (Cho et al., 2014) and the LSTM in our Evidence Assembler have the hidden dimension of 80. In practice, we only apply TF-IDF based retrieval procedure to our Document Explorer and Answer Proposer during inference, and during training time we use the full set of supporting documents as the input. This is because we observed that the Document Explorer overfits faster in the reduced document-search space. For the Evidence Assembler, we employ both the TF-IDF retrieval and Document Explorer to get the ‘reasoning tree’ of documents, at both training and testing time. We refer to the Sec. E in the appendix for the implementation details of our MedHop models. 3.3 Results We first evaluate our system on the WikiHop dataset. For a fair comparison to recent works (De Cao et al., 2018; Song et al., 2018a; Raison et al., 2018), we report our “EPAr” with 2719 Dev Test BiDAF (Welbl et al., 2017)⋆ 42.9 Coref-GRU (Dhingra et al., 2018) 56.0 59.3 WEAVER (Raison et al., 2018) 64.1 65.3 MHQA-GRN (Song et al., 2018a) 62.8 65.4 Entity-GCN (De Cao et al., 2018) 64.8 67.6 BAG (Cao et al., 2019) 66.5 69.0 CFC (Zhong et al., 2019) 66.4 70.6 EPAr (Ours) 67.2 69.1 Table 1: Dev set and Test set accuracy on WIKIHOP dataset. The model marked with ⋆does not use candidates and directly predict the answer span. EPAr is our system with TF-IDF retrieval, Document Explorer, Answer Proposer and Evidence Assembler. follow follow full + multiple + single BiDAF Baseline 62.8 63.1 58.4 DE+AP+EA⋆ 65.2 66.9 61.1 AP+EA 68.7 67.0 62.8 DE+AP+EA 69.4 70.6 64.7 DE+AP+EA† 71.8 73.8 66.9 DE+AP+EA†+SelfAttn 73.5 72.9 67.2 Table 2: Ablation accuracy on WIKIHOP dev set. The model marked with ⋆does not use the TFIDF-based document retrieval procedure. 
The models marked with † are our full EPAr systems with 300-d word embeddings and 100-d LSTM-RNN hidden size (same as the last row of Table 1), while the 4th row represents the smaller EPAr system. 300-d embeddings and 100-d hidden size of the encoding LSTM-RNN. As shown in Table 1, EPAr achieves 67.2% accuracy on the dev set, outperforming all published models, and achieves 69.1% accuracy on the hidden test set, which is competitive with the current state-of-the-art result.8 Next, in Table 2, we further evaluate our EPAr system (and its smaller-sized and ablated versions) on the “follows + multiple”, “follows + single”, and the full development set. First, note that on the full development set, our smaller system (“DE+AP+EA”) achieves statistically significant (p-value < 0.01)9 improvements over the BiDAF baseline and is also comparable to De Cao et al. (2018) on the development set (64.7 vs. 64.8).10 8Note that there also exists a recent anonymous unpublished entry on the leaderboard with 70.9% accuracy, which is concurrent to our work. Also note that our system achieves these strong accuracies even without using pretrained language model representations like ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018), which have been known to give significant improvements in machine comprehension and QA tasks. We leave these gains for future work. 9All stat. signif. is based on bootstrapped randomization test with 100K samples (Efron and Tibshirani, 1994). 10For time and memory feasibility, we use this smaller Query subject: Sulphur Spring, Query body: located in the administrative territorial entity Hayden Valley is a large, sub-alpine valley in Yellowstone National Park straddling the Yellowstone River ... 1 Sulphur Spring (also known as Crater Hills Geyser), is a geyser in the Hayden Valley region of Yellowstone National Park in the United States . ... 0 The Yellowstone River is a tributary of the Missouri River ... Yellowstone Falls consist of two major waterfalls on the Yellowstone River, within Wyoming, United States. ... Yellowstone National Park is a national park located in the U.S. states of Wyoming, Montana and Idaho. ... 2 3 4 Missouri Wyoming Wyoming? Montana? Idaho? 0 1 4 2 3 Yellowstone Figure 3: A ‘reasoning tree’ with 4 leaves that lead to different answers (marked in bold). The ground-truth answer is marked in red additionally. Moreover, we see that EPAr is able to achieve high accuracy in both the examples that require multi-hop reasoning (“follows + multiple”), and other cases where a single document suffices for correctly answering the question (“follows + single”), suggesting that our system is able to adjust to examples of different reasoning requirements. The evaluation results further demonstrate that our Document Explorer combined with TFIDF-based retrieval (row ‘DE+AP+EA’) consistently outperforms TF-IDF alone (row ‘AP+EA’) or the Document Explorer without TF-IDF (row ‘DE+AP+EA⋆’ in Table 2), showing that our 2hop TF-IDF document retrieval procedure is able to broadly identify relevant documents and further aid our Document Explorer by reducing its search space. Finally, comparing the last two rows in Table 2 shows that using self-attention (Zhong et al., 2019) to compute the document representation can further improve the full-sized system. We show an example of the ‘reasoning tree’ constructed by the Document Explorer and the correct answer predicted by the Evidence Assembler in Fig. 3. We report our system’s accuracy on the MedHop dataset in Table 3. 
Our best system achieves 60.3 on the hidden test set11, outperforming all current models on the leaderboard. However, as reported by Welbl et al. (2017), the original MedHop dataset suffers from a candidate frequency imbalance issue that can be exploited by certain strong model with 100-d word embeddings and 20-d LSTMRNN hidden size (similar to baselines in Welbl et al. (2017)) in all our analysis/ablation results (including Sec. 4). 11The masked MedHop test set results use the smaller size model, because this performed better on the masked dev set. 2720 Test Test (Masked) FastQA⋆(Weissenborn et al., 2017) 23.1 31.3 BiDAF⋆(Seo et al., 2017) 33.7 47.8 CoAttention 58.1 Most Frequent Candidate⋆ 10.4 58.4 EPAr (Ours) 41.6 60.3 Table 3: Test set accuracy on MEDHOP dataset. The results marked with ⋆are reported in (Welbl et al., 2017). R@1 R@2 R@3 R@4 R@5 Random 11.2 17.3 27.6 40.8 50.0 1-hop TFIDF 32.7 48.0 56.1 63.3 70.4 2-hop TFIDF 42.9 56.1 70.4 78.6 82.7 DE 38.8 50.0 65.3 73.5 83.7 TFIDF+DE 44.9 64.3 77.6 82.7 90.8 Table 4: Recall-k score is the % of examples where one of the human-annotated reasoning chains is recovered in the top-k root-to-leaf paths in the ‘reasoning tree’. ‘TFIDF+DE’ is the combination of the 2-hop TF-IDF retrieval procedure and our Document Explorer. heuristics like the ‘Most Frequent Candidate’ in Table 3. To eliminate this bias and to test our system’s ability to conduct multi-hop reasoning using the context, we additionally evaluate our system on the masked version of MedHop, where every candidate expression is replaced randomly using 100 unique placeholder tokens so that models can only rely on the context to comprehend every candidate. Our model achieves 41.6% accuracy in this “masked” setting, outperforming all previously published works by a large margin. 4 Analysis In this section, we present a series of new analyses and comparisons in order to understand the contribution from each of our three modules and demonstrate their advantages over other corresponding baselines and heuristics. 4.1 Reasoning Chain Recovery Tests We compare our Document Explorer with two TFIDF-based document selectors for their ability to recover the reasoning chain of documents. The 1-hop TF-IDF selector selects the top k + 1 documents with the highest TF-IDF score w.r.t. the query subject. The 2-hop TF-IDF selector, as in Sec. 2.1, first selects the top-1 TF-IDF document w.r.t. the query subject and then selects the top k remaining documents based on the TF-IDF score with respect to the first selected document. Finally, we also compare to our final combination R@1 R@2 R@3 R@4 R@5 Random 39.9 51.4 60.2 67.8 73.5 1-hop TFIDF 38.4 48.5 58.6 67.4 73.7 2-hop TFIDF 38.4 58.7 70.2 77.2 81.6 DE 52.5 70.2 80.3 85.8 89.0 TFIDF+DE 52.2 69.0 77.8 82.2 85.2 Table 5: Recall-k score is the percentage of examples where the ground-truth answer is present in the top-k root-to-leaf path in the ‘reasoning tree’. ‘TFIDF+DE’ is the combination of the 2-hop TFIDF retrieval procedure and our Document Explorer. of 2-hop TF-IDF and Document Explorer. Human Evaluation: We collect humanannotated reasoning chains for 100 documents from the “follows + multiple” dev set, and compare these to the ‘reasoning tree’ constructed by our Document Explorer to assess its ability to discover the hidden reasoning chain from the entire pool of supporting documents. 
For each example, human annotators (external, English-speaking) select two of the smallest set of documents, from which they can reason to find the correct answer from the question. As shown in Table 4, our Document Explorer combined with 2-hop TF-IDF (row ‘TFIDF+DE’) obtains higher golden-chain recall scores compared to the two TFIDF-based document retrieval heuristics (row ‘1-hop TFIDF’ and ‘2-hop TFIDF’) alone or the Document Explorer without TF-IDF (row ‘DE’). Answer Span Test: We also test our Document Explorer’s ability to find the document with mentions of the ground-truth answer. Logically, the fact that the answer appears in one of the documents in the ‘reasoning tree’ signals higher probability that our modules at the following stages could predict the correct answer. As shown in Table 5, our Document Explorer receives significantly higher answer-span recall scores compared to the two TF-IDF-based document selectors.12 4.2 Answer Proposer Comparisons We compare our Answer Proposer with two rulebased sentence extraction heuristics for the ability to extract salient information from every reasoning chain. For most documents in the WikiHop dataset, the first sentence is comprised of the most salient information from that document. Hence, 12In this test, the Document Explorer alone outperforms its combination with the 2-hop TF-IDF retrieval. In practice, our system employs both procedures due to the advantage shown in both empirical results (Table 2) and analysis (Table 4). 2721 full follows follows + multiple + single Full-doc 63.1 68.4 69.0 Lead-1 63.6 68.7 70.2 AP w.o. attn 63.3 68.3 69.6 AP 64.7 69.4 70.6 Table 6: Answer Proposer comparison study. “Follows + multiple” and “follows + single” are the subsets of dev set as described in Sec. 3.1. full follows follows + multiple + single Single-chain 59.9 64.3 63.8 Avg-vote 54.6 56.3 55.6 Max-vote 51.5 53.9 53.3 w. Reranker 60.6 65.1 65.5 w. Assembler 64.7 69.4 70.6 Table 7: Evidence Assembler comparison study: Reranker (described in the appendix) rescores the documents selected by the Document Explorer. we construct one baseline that concatenates the first sentence from each selected document as the input to the Evidence Assembler. We also show results of combining all the full documents as the synthesized context instead of selecting one sentence from every document. We further present a lighter neural-model baseline that directly proposes the answer from the leaf document without first creating its ancestor-aware representation. As shown in Table 6, the system using sentences selected by our Answer Proposer outperforms both rule-based heuristics (row 1 and 2) and the simple neural baseline (row 3). 4.3 Assembler Ablations In order to justify our choice of building an Assembler, we build a 2-module system without the Evidence-Assembler stage by applying the Answer Proposer to only the top-1 reasoning chain in the tree. We also present two voting heuristics that selects the final answer by taking the average/maximum prediction probability from the Answer Proposer on all document chains. Furthermore, we compare our Evidence Assembler with an alternative model that, instead of assembling information from all reasoning chains, reranks all chains and their proposed answers to select the top-1 answer prediction. As shown in Table 7, the full system with the Assembler achieves significant improvements over the 2-module system. 
This demonstrates the importance of the Assembler in enabling information aggregation over multiple reasoning chains. The results further show that our Assembler is better than the reranking alternative. 4.4 Multi-hop Reasoning Example We visualize the 3-stage reasoning procedure of our EPAr system in Fig. 4. As shown in the left of Fig. 4, the Document Explorer first locates the root document (“The Polsterberg Pumphouse ...”) based on the query subject. It then finds three more documents that are related to the root document, constructing three document chains. The Answer Proposer proposes a candidate answer from each of the three chains selected by the Document Explorer. Finally, the Evidence Assembler selects key sentences from all documents in the constructed document chains and makes the final prediction (“Lower Saxony”). 5 Related Works The last few years have witnessed significant progress on text-based machine reading comprehension and question answering (MRC-QA) including cloze-style blank-filling tasks (Hermann et al., 2015), open-domain QA (Yang et al., 2015), answer span prediction (Rajpurkar et al., 2016, 2018), and generative QA (Nguyen et al., 2016). However, all of the above datasets are confined to a single-document context per question setup. Joshi et al. (2017) extended the task to the multidocument regime, with some examples requiring cross-sentence inference. Earlier attempts in multi-hop MRC focused on reasoning about the relations in a knowledge base (Jain, 2016; Zhou et al., 2018; Lin et al., 2018) or tables (Yin et al., 2015). QAngaroo WikiHop and MedHop (Welbl et al., 2017), on the other hand, are created as natural language MRC tasks. They are designed in a way such that the evidence required to answer a query could be spread across multiple documents. Thus, finding some evidence requires building a reasoning chain from the query with intermediate inference steps, which poses extra difficulty for MRC-QA systems. HotpotQA (Yang et al., 2018) is another recent multi-hop dataset which focuses on four different reasoning paradigms. The emergence of large-scale MRC datasets has led to innovative neural models such as coattention (Xiong et al., 2017), bi-directional attention flow (Seo et al., 2017), and gated attention (Dhingra et al., 2017), all of which are metic2722 a The Sperberhai Dyke is in fact an aqueduct which forms part of the Upper Harz Water Regale network of reservoirs, ditches, dams and tunnels ... The Polsterberg Pumphouse (German : Polsterberger Hubhaus) is a pumping station above the Dyke Ditch in the Upper Harz in central Germany which is used today as a forest restaurant. ... The Harz is the highest mountain range in Northern Germany and its rugged terrain extends across parts of Lower Saxony, Saxony-Anhalt, and Thuringia. ... The Dyke Ditch is the longest artificial ditch in the Upper Harzin central Germany. ... The Upper Harz refers to the northwestern and higher part of the Harz mountain range in Germany. ... Germany, officially the Federal Republic of Germany, is a federal parliamentary republic in central-western Europe. ... Query subject: Polsterberg Pumphouse Sewage is a water-carried waste, in solution or suspension, that is intended to be removed from a community. Wildemann is a town and a former municipality in the district of Goslar, in Lower Saxony, Germany. 1 2 2 2 Document Explorer Query body: located_in_the_administrative_territorial_entity b d c The Dyke Ditch is the longest artificial ditch in the Upper Harz in central Germany. 
Its purpose was to collect surface runoff for the operation of the Upper Harz mining industry from precipitation-heavy regions a long way away (particularly from the Bruchberg and parts of the Brocken massif). ... The Upper Harz refers to the northwestern and higher part of the Harz mountain range in Germany. ... the term Upper Harz covers the area of the seven historical mining towns - Clausthal, Zellerfeld, Andreasberg, Altenau, Lautenthal, Wildemann and Grund - in the present-day German federal state of Lower Saxony. ... The Harz is the highest mountain range in Northern Germany and its rugged terrain extends across parts of Lower Saxony, Saxony-Anhalt, and Thuringia. The name "Harz" derives from the Middle High German word "Hardt" or "Hart" (mountain forest), Latinized as "Hercynia". Answer Proposer Query subject: Polsterberg Pumphouse Query body: located_in_the_administrative_territorial_entity a. The Polsterberg Pumphouse (German : Polsterberger Hubhaus) is a pumping station above the Dyke Ditch in the Upper Harz in central Germany which is used today as a forest restaurant. b. The Harz is the highest mountain range in Northern Germany and its rugged terrain extends across parts of Lower Saxony, Saxony-Anhalt, and Thuringia. The Upper Harz refers to the northwestern and higher part of the Harz mountain range in Germany. c. In its traditional sense, the term Upper Harz covers the area of the seven historical mining towns - Clausthal, Zellerfeld, Andreasberg, Altenau, Lautenthal, Wildemann and Grund - in the present-day German federal state of Lower Saxony. d. The Dyke Ditch is the longest artificial ditch in the Upper Harz in central Germany. Its purpose was to collect surface runoff for the operation of the Upper Harz mining industry from precipitation-heavy regions a long way away (particularly from the Bruchberg and parts of the Brocken massif). Final answer: Lower Saxony Evidence Assembler Figure 4: An example of our 3-stage EPAr system exploring relevant documents, proposing candidate answers, and then assembling extracted evidence to make the final prediction. ulously designed to solve single-document MRC tasks. Clark and Gardner (2018) and Chen et al. (2017) used a simple TF-IDF based documentselection procedure to find the context that is most relevant to the query for multi-document QA. However, this 1-hop, similarity-based selection process would fail on multi-hop readingcomprehension datasets like WikiHop because the query subject and the answer could appear in different documents. On the other hand, our Document Explorer can discover the document with the answer “Loon op Zand” (in Fig. 1a) by iteratively selecting relevant documents and encoding the hinge words “Efteling” and “Kaatsheuvel” in its memory. Recently, Dhingra et al. (2018) leveraged coreference annotations from an external system to connect the entities. Song et al. (2018a) and De Cao et al. (2018) utilized Graph Convolutional Networks (Kipf and Welling, 2017) and Graph Recurrent Networks (Song et al., 2018b; Zhang et al., 2018) to model the relations between entities. Recently, Cao et al. (2019) extended the Graph Convolutional Network in De Cao et al. (2018) by introducing bi-directional attention between the entity graph and query. By connecting the entities, these models learn the inference paths for multihop reasoning. Our work differs in that our system learns the relation implicitly without the need of any human-annotated relation. Recently, Zhong et al. 
(2019) used hierarchies of co-attention and self-attention to combine evidence from multiple scattered documents. Our novel 3-module architecture is inspired by previous 2-module selection architectures for MRC (Choi et al., 2017). Similarly, Wang et al. (2018) first selected relevant content by ranking documents and then extracted the answer span. Min et al. (2018) selected relevant sentences from long documents in a singledocument setup and achieved faster speed and robustness against adversarial corruption. However, none of these models are built for multi-hop MRC where our EPAr system shows great effectiveness. 6 Conclusion We presented an interpretable 3-module, multihop, reading-comprehension system ‘EPAr’ which constructs a ‘reasoning tree’, proposes an answer candidate for every root-to-leaf chain, and merges key information from all reasoning chains to make the final prediction. On WikiHop, our system outperforms all published models on the dev set, and achieves results competitive with the current stateof-the-art on the test set. On MedHop, our system outperforms all previously published models on the leaderboard test set. We also presented multiple reasoning-chain recovery tests for the explainability of our system’s reasoning capabilities. 7 Acknowledgement We would like to thank Johannes Welbl for helping test our system on WikiHop and MedHop. We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, Salesforce Deep Learning Research Grant, Nvidia GPU awards, Amazon AWS, and Google Cloud Credits. The views contained in this article are those of the authors and not of the funding agency. 2723 References D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Third International Conference on Learning Representations. Yu Cao, Meng Fang, and Dacheng Tao. 2019. BAG: bidirectional attention entity graph convolutional network for multi-hop reasoning question answering. In NAACL-HLT. Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. 2016. Hierarchical memory networks. arXiv preprint arXiv:1605.07427. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In ACL. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In ACL. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. 
In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gatedattention readers for text comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1832–1846, Vancouver, Canada. Association for Computational Linguistics. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Sarthak Jain. 2016. Question answering over knowledge base using factual memory networks. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In ICLR. Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725–1735. Association for Computational Linguistics. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Martin Raison, Pierre-Emmanuel Mazar´e, Rajarshi Das, and Antoine Bordes. 2018. Weaver: Deep coencoding of questions and documents for machine reading. arXiv preprint arXiv:1804.10490. P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Association for Computational Linguistics (ACL). 2724 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. 
Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. A graph-to-sequence model for amrto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616– 1626. Association for Computational Linguistics. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. In International Conference on Machine Learning (ICML). Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering. In AAAI. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In CoNLL. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. In TACL. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In ICLR. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint. Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentencestate lstm for text representation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL). Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. In ICLR. Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multirelation question answering. In Proceedings of the 27th International Conference on Computational Linguistics. Appendix A Reranker We explore an alternative to Evidence Assembler (EA), where instead of selecting key sentences from every root-to-leaf path in the reasoning tree, we use a reranker to rescore the selected documents. Specifically, given a document reasoning-tree of tw reasoning chains, we use bidirectional attention (Seo et al., 2017) between the last documents in each chain and all the documents from the previous hops in that chain to obtain {ˆh1, · · · , ˆhtw} which are the refined representations of the leaf documents. We then obtain a fixed length document representation as the weighted average of word representations for each of the tw documents using similarity with query subject and query body as the weights using function α. We obtain the scores for each of the documents by computing similarity with the answer which that reasoning chain proposes using β. (See Sec. C below for details of the similarity functions α and β.) B Self-Attention We use self-attention from Zhong et al. (2019) to get the compact representation for all supporting documents. 
Given contextual word representations for the supporting documents H = {h1, h2, · · · , hN′} such that hi ∈RK×2v, we define Selfattn(hi) →pi ∈R2v as: aik = tanh(W2tanh(W1hk i + b1) + b2) ˆai = softmax(ai) pi = K X k=1 ˆaikhk i (3) such that pi provides the summary of the ith document with a vector representation. 2725 C Similarity Functions When constructing our 3-module system, we use similarity functions α and β. The function β is defined as: β(h, c) = Wβ1relu(Wβ2[h; u; h◦u]+bβ2)+bβ1 (4) where relu(x) = max(0, x), and ◦represents element-wise multiplication. And the function α is defined as: α(h, u) = Wα2 T ((Wα1h + bα1) ◦u) (5) where all trainable weights are marked in bold. D Datasets and Metrics We evaluate our 3-module system on QAngaroo (Welbl et al., 2017), which is a set of two multihop reading comprehension datasets: WikiHop and MedHop. WikiHop contains 51K instances, including 44K for training, 5K for development and 2.5K for held out testing. MedHop is a smaller dataset based on the domain of molecular biology. It consists of 1.6K instances for training, 342 for development, and 546 for held out testing. Each instance consists of a query (which can be separated as a query subject and a query body), a set of supporting documents and a list of candidate answers. For the WikiHop development set, each instance is also annotated as “follows” or “not follows”, which signifies whether the answer can be inferred from the given set of supporting documents, and “multiple” or “single”, which tells whether the complete reasoning chain comprises of multiple documents or just a single one. We measure our system’s performance on these subsets of the development set that are annotated as “follows and multiple” and “follows and single”. This allows us to evaluate our systems on a less noisy version of development set and to investigate their strength in queries requiring different levels of multi-hop reasoning behavior. E Implementation Details For Medhop, considering the small size of the dataset, we use 20-d hidden size of the encoding LSTM-RNN and the last hidden state of the encoding LSTM-RNN to get compact representation of the documents. We also use a hidden size of 20 for the embedded GRU cell and LSTM in our Evidence Assembler. In addition to that, since Welbl et al. (2017) show the poor performance of TF-IDF model we drop the TF-IDF document retrieval procedure and supervision at the first hop of the Document Explorer (with the document having highest TF-IDF score to query subject). We train all modules of our system jointly using Adam Optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.001 and a batch size of 10. We also use a dropout rate of 0.2 in all our linear projection layers, encoding LSTM-RNN and character CNNs.
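As an illustration of the self-attention pooling in Eq. 3 (Appendix B), a minimal PyTorch sketch is given below; hidden_dim corresponds to the 2v-dimensional word representations and proj_dim is an assumed intermediate size, so this is a sketch of the equation rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttnPool(nn.Module):
    """Scores every word of a document and returns the attention-weighted
    sum of word representations, as in Eq. 3."""
    def __init__(self, hidden_dim, proj_dim):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, proj_dim)  # W1, b1
        self.w2 = nn.Linear(proj_dim, 1)           # W2, b2

    def forward(self, h):
        # h: (K, hidden_dim) contextual word representations of one document
        a = torch.tanh(self.w2(torch.tanh(self.w1(h))))  # (K, 1) per-word scores
        a_hat = torch.softmax(a, dim=0)                  # normalize over the K words
        p = (a_hat * h).sum(dim=0)                       # (hidden_dim,) document summary
        return p
```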
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2726–2736 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2726 Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA Yichen Jiang and Mohit Bansal UNC Chapel Hill {yichenj, mbansal}@cs.unc.edu Abstract Multi-hop question answering requires a model to connect multiple pieces of evidence scattered in a long context to answer the question. In this paper, we show that in the multihop HotpotQA (Yang et al., 2018) dataset, the examples often contain reasoning shortcuts through which models can directly locate the answer by word-matching the question with a sentence in the context. We demonstrate this issue by constructing adversarial documents that create contradicting answers to the shortcut but do not affect the validity of the original answer. The performance of strong baseline models drops significantly on our adversarial evaluation, indicating that they are indeed exploiting the shortcuts rather than performing multi-hop reasoning. After adversarial training, the baseline’s performance improves but is still limited on the adversarial evaluation. Hence, we use a control unit that dynamically attends to the question at different reasoning hops to guide the model’s multihop reasoning. We show that this 2-hop model trained on the regular data is more robust to the adversaries than the baseline model. After adversarial training, this 2-hop model not only achieves improvements over its counterpart trained on regular data, but also outperforms the adversarially-trained 1-hop baseline. We hope that these insights and initial improvements will motivate the development of new models that combine explicit compositional reasoning with adversarial training.1 1 Introduction The task of question answering (QA) requires the model to answer a natural language question by finding relevant information in a given natural language context. Most QA datasets require singlehop reasoning only, which means that the evidence 1Our code and data are publicly available at: https://github.com/jiangycTarheel/ Adversarial-MultiHopQA What was the father of Kasper Schmeichel voted to be by the IFFHS in 1992? R. Bolesław Kelly MBE (] ; born 18 November 1963) is a Danish former professional footballer who played as a Defender, and was voted the IFFHS World's Best Defender in 1992 and 1993. Kasper Peter Schmeichel (] ; born 5 November 1986) is a Danish professional footballer who plays as a goalkeeper ... . He is the son of former Manchester United and Danish international goalkeeper Peter Schmeichel. Edson Arantes do Nascimento (] ; born 23 October 1940), known as Pelé (] ), is a retired Brazilian professional footballer who played as a forward. In 1999, he was voted World Player of the Century by IFFHS. Peter Bolesław Schmeichel MBE (] ; born 18 November 1963) is a Danish former professional footballer who played as a goalkeeper, and was voted the IFFHS World's Best Goalkeeper in 1992 and 1993. Kasper Hvidt (born 6 February 1976 in Copenhagen) is a Danish retired handball goalkeeper, who lastly played for KIF Kolding and previous Danish national team. ... Hvidt was also voted as Goalkeeper of the Year March 20, 2009, second place was Thierry Omeyer ... 
Prediction: World's Best Goalkeeper (correct) Question Golden Reasoning Chain Docs Distractor Docs Adversarial Doc Prediction under adversary: IFFHS World's Best Defender Figure 1: HotpotQA example with a reasoning shortcut, and our adversarial document that eliminates this shortcut to necessitate multi-hop reasoning. necessary to answer the question is concentrated in a single sentence or located closely in a single paragraph (Q: “What’s the color of the sky?”, Context: “The sky is blue.”, Answer: “Blue”). Such datasets emphasize the role of matching and aligning information between the question and the context (“sky→sky, color→blue”). Previous works have shown that models with strong questionaware context representation (Seo et al., 2017; Xiong et al., 2017) can achieve super-human performance on single-hop QA tasks like SQuAD (Rajpurkar et al., 2016, 2018). Recently, several multi-hop QA datasets, such 2727 as QAngaroo (Welbl et al., 2017) and HotpotQA (Yang et al., 2018), have been proposed to further assess QA systems’ ability to perform composite reasoning. In this setting, the information required to answer the question is scattered in the long context and the model has to connect multiple evidence pieces to pinpoint to the final answer. Fig. 1 shows an example from the HotpotQA dev set, where it is necessary to consider information in two documents to infer the hidden reasoning chain “Kasper Schemeichel son of −−−−→Peter Schemeichel voted as −−−−−→World’s Best Goalkeeper” that leads to the final answer. However, in this example, one may also arrive at the correct answer by matching a few keywords in the question (“voted, IFFHS, in 1992”) with the corresponding fact in the context without reasoning through the first hop to find “father of Kasper Schmeichel”, as neither of the two distractor documents contains sufficient distracting information about another person “voted as something by IFFHS in 1992”. Therefore, a model performing well on the existing evaluation does not necessarily suggest its strong compositional reasoning ability. To truly promote and evaluate a model’s ability to perform multi-hop reasoning, there should be no such “reasoning shortcut” where the model can locate the answer with single-hop reasoning only. This is a common pitfall when collecting multi-hop examples and is difficult to address properly. In this work, we improve the original HotpotQA distractor setting2 by adversarially generating better distractor documents that make it necessary to perform multi-hop reasoning in order to find the correct answer. As shown in Fig. 1, we apply phrase-level perturbations to the answer span and the titles in the supporting documents to create the adversary with a new title and a fake answer to confuse the model. With the adversary added to the context, it is no longer possible to locate the correct answer with the single-hop shortcut, which now leads to two possible answers (“World’s Best Goalkeeper” and “World’s Best Defender”). We evaluate the strong “Bi-attention + Self-attention” model (Seo et al., 2017; Wang et al., 2017) from Yang et al. (2018) on our constructed adversarial dev set (adv-dev), and find that its EM score drops significantly. In the example in Fig. 1, the 2HotpotQA has a fullwiki setting as an open-domain QA task. In this work, we focus on the distractor setting as it provides a less noisy environment to study machine reasoning. model is confused by our adversary and predicts the wrong answer (“World’s Best Defender”). 
Our experiments further reveal that when strong supervision of the supporting facts that contain the evidence is applied, the baseline achieves a significantly higher score on the adversarial dev set. This is because the strong supervision encourages the model to not only locate the answer but also find the evidence that completes the first reasoning hop and hence promotes robust multi-hop reasoning behavior from the model. We then train the baseline with supporting fact supervision on our generated adversarial training set (adv-train) and observe significant improvement on adv-dev. However, the result is still poor compared to the model’s performance on the regular dev set because this single-hop model is not well-designed to perform multi-hop reasoning. To motivate and analyze some new multi-hop reasoning models, we propose an initial architecture by incorporating the recurrent control unit from Hudson and Manning (2018), which dynamically computes a distribution over question words at each reasoning hop to guide the multi-hop biattention. In this way, the model can learn to put the focus on “father of Kasper Schmeichel” at the first step and then attend to “voted by IFFHS in 1992” in the second step to complete this 2hop reasoning chain. When trained on the regular data, this 2-hop model outperforms the singlehop baseline in the adversarial evaluation, indicating improved robustness against adversaries. Furthermore, this 2-hop model, with or without supporting-fact supervision, can benefit from adversarial training and achieve better performance on adv-dev compared to the counterpart trained with the regular training set, while also outperforming the adversarially-trained baseline. Overall, we hope that these insights and initial improvements will motivate the development of new models that combine explicit compositional reasoning with adversarial training. 2 Adversarial Evaluation 2.1 The HotpotQA Task The HotpotQA dataset (Yang et al., 2018) is composed of 113k human-crafted questions, each of which can be answered with facts from two Wikipedia articles. During the construction of the dataset, the crowd workers are asked to come up with questions requiring reasoning about two 2728 Supporting Doc 1: Sachin Warrier Sachin Warrier is a playback singer and composer in the Malayalam cinema industry from Kerala. He became notable with the song "Muthuchippi Poloru" from the film Thattathin Marayathu. He made his debut with the movie Malarvaadi Arts Club. He was working as a software engineer in Tata Consultancy Services in Kochi. Later he resigned from the job to concentrate more on music. His latest work is as a composer for the movie Aanandam. Supporting Doc 2: Tata Consultancy Services Tata Consultancy Services Limited (TCS) is an Indian multinational information technology (IT) service, consulting and business solutions company Headquartered in Mumbai, Maharashtra. It is a subsidiary of the Tata Group and operates in 46 countries. Question: Where is the company that Sachin Warrier worked for as a software engineer headquartered? Answer: Mumbai Model's prediction: Mumbai Adversarial Doc: Valencia Street Circuit Limited is an Indian multinational information technology (IT) service, consulting and business solutions company Headquartered in Delhi, Maharashtra. It is a subsidiary of the Valencia Group and operates in 46 countries. 
Model's prediction: Delhi (Step 3) Substitue all titles and answers tokens Original answer: Mumbai Advesarial answer: Delhi (Step 1) Generate fake answer Title: Tata Consultancy Services New title: Valencia Street Circuit (Step 2) Sample title Figure 2: An illustration of our ADDDOC procedure. In this example, the keyword “headquarter” appears in no distractor documents. Thus the reader can easily infer the answer by looking for this keyword in the context. given documents. Yang et al. (2018) then select the top-8 documents from Wikipedia with the shortest bigram TF-IDF (Chen et al., 2017) distance to the question as the distractors to form the context with a total of 10 documents. Since the crowd workers are not provided with distractor documents when generating the question, there is no guarantee that both supporting documents are necessary to infer the answer given the entire context. The multi-hop assumption can be broken by incompetent distractor documents in two ways. First, one of the selected distractors may contain all required evidence to infer the answer (e.g., “The father of Kasper Schmeichel was voted the IFFHS World’s Best Goalkeeper in 1992.”). Empirically, we find no such cases in HotpotQA, as Wiki article about one subject rarely discusses details of another subject. Second, the entire pool of distractor documents may not contain the information to truly distract the reader/model. As shown in Fig. 1, one can directly locate the answer by looking for a few keywords in the question (“voted, IFFHS, in 1992”) without actually discovering the intended 2-hop reasoning path. We call this pattern of bypassing the first reasoning hop the “reasoning shortcut”, and we find such shortcuts exist frequently in the non-comparisontype examples in HotpotQA.3 We randomly sample 50 “bridge-type” questions in the dev set, and found that 26 of them contain this kind of reasoning shortcut. 2.2 Adversary Construction To investigate whether neural models exploit reasoning shortcuts instead of exploring the desired reasoning path, we adapt the original examples in HotpotQA to eliminate these shortcuts. Given a context-question-answer tuple (C, q, a) that may contain a reasoning shortcut, the objective is to produce (C′, q, a) such that (1) a is still the valid answer to the new tuple, (2) C′ is close to the original example, and (3) there is no reasoning shortcut that leads to a single answer. In HotpotQA, there is a subset of 2 supporting documents P ⊂C that contains all evidence needed to infer the answer. To achieve this, we propose an adversary ADDDOC (illustrated in Fig. 2) that constructs documents P ′ to get (ξ(C, P ′), q, a) where ξ is a function that mixes the context and adversaries. 3HotpotQA also includes a subset of comparison questions (e.g.,“Are Leo and Kate of the same age?”) that make up to 21% of total examples in the dev set. These questions can’t be answered without aggregating information from multiple documents, as shortcuts like “Leo is one-year older than Kate” rarely exist in Wikipedia articles. Therefore, we simply leave these examples unchanged in our adversarial data. 2729 Suppose p2 ∈P is a document containing the answer a and p1 ∈P is the other supporting document.4 ADDDOC applies a word/phrase-level perturbation to p2 so that the generated p′ 2 contains a fake answer that satisfies the reasoning shortcut but does not contradict the answer to the entire question (e.g., the adversarial document in Fig. 2). 
First, for every non-stopword in the answer, we find the substitute within the top-10 closest words in GloVe (Pennington et al., 2014) 100-d vector space that doesn’t have an overlapping substring longer than 3 with the original answer (“Mumbai →Delhi, Goalkeeper →Defender”). If this procedure fails, we randomly sample a candidate from the entire pool of answers in the HotpotQA dev set (e.g., “Rome” for Fig. 2 or “defence of the Cathedral” for Fig. 1). We then replace the original answer in p2 with our generated answer to get p′ 2. If the original answer spans multiple words, we substitute one non-stopword in the answer with the corresponding sampled answer word to create the fake answer (“World’s Best Goalkeeper → World’s Best Defender”) and replace all mentions of the original answer in p′ 2. The resulting paragraph p′ 2 provides an answer that satisfies the reasoning shortcut, but also contradicts the real answer to the entire question as it forms another valid reasoning chain connecting the question to the fake answer (“Sachin Warrier workAt −−−−→TCS at −→Delhi”). To break this contradicting reasoning chain, we need to replace the bridge entity that connects the two pieces of evidence (“Tata Consultancy Services” in this case) with another entity so that the generated answer no longer serves as a valid answer to the question. We replace the title of p′ 2 with a candidate randomly sampled from all document titles in the HotpotQA dev set. If the title of p1 appears in p′ 2, we also replace it with another sampled title to entirely eliminate the connection between p′ 2 and p1. Empirically, we find that the title of either p1 or p2 serves as the bridge entity in most examples. Note that it is possible that models trained on our adversarial data could simply learn new reasoning shortcuts in these adversaries by ignoring adversarial documents with randomly-sampled titles, because these titles never appear in any other document in the context. Hence, to eliminate this bias in ti4|P| = 2 in HotpotQA. If both documents in P contain the answer, we apply ADDDOC twice while alternating the choice of p1 and p2 tle occurrence, for each adversarial document, we additionally find another document from the entire dev set that contains the exact title of our adversarial document and add it to the context.5 Every new document added to the context replaces an original non-supporting document so that the total number of documents in context remains unchanged. Note that ADDDOC adversaries are model-independent, which means that they require no access to the model or any training data, similar to the ADDONESENT in Jia and Liang (2017). 3 Models 3.1 Encoding We first describe the pre-processing and encoding steps. We use a Highway Network (Srivastava et al., 2015) of dimension v, which merges the character embedding and GloVe word embedding (Pennington et al., 2014), to get the word representations for the context and the question as x ∈RJ×v and q ∈RS×v where J and S are the lengths of the context and question. We then apply a bi-directional LSTM-RNN (Hochreiter and Schmidhuber, 1997) of d hidden units to get the contextualized word representations for the context and question: h = BiLSTM(x); u = BiLSTM(q) so that h ∈RJ×2d and u ∈RS×2d. 3.2 Single-Hop Baseline We use the bi-attention + self-attention model (Yang et al., 2018; Clark and Gardner, 2018), which is a strong near-state-of-the-art6 model on HotpotQA. 
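Before turning to the model details, the fake-answer word substitution in ADDDOC (Sec. 2.2, Step 1 in Fig. 2) can be sketched as follows. This is a minimal illustration rather than the authors' code: `glove` is assumed to be a word-to-vector dictionary, `answer_pool` the pool of dev-set answers, and the brute-force nearest-neighbour search stands in for a proper similarity index.

```python
import random
import numpy as np

def shares_long_substring(a, b, k=3):
    # True if a and b share a common substring longer than k characters
    a, b = a.lower(), b.lower()
    return any(a[i:i + k + 1] in b for i in range(len(a) - k))

def fake_answer_word(word, glove, orig_answer, answer_pool, topn=10):
    """Pick a GloVe neighbour of `word` with no long substring overlap with the
    original answer; otherwise fall back to a randomly sampled dev-set answer."""
    if word in glove:
        v = glove[word]
        others = [w for w in glove if w != word]
        sims = sorted(
            others,
            key=lambda w: -float(np.dot(glove[w], v)) /
                          (np.linalg.norm(glove[w]) * np.linalg.norm(v) + 1e-8))
        for cand in sims[:topn]:
            if not shares_long_substring(cand, orig_answer):
                return cand
    return random.choice(answer_pool)
```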
Given the contextualized encoding h, u for the context and question, BiAttn(h, u) (Seo et al., 2017; Xiong et al., 2017) first computes a similarity matrix MS×J between every question and context word and use it to derive context-to-query attention: Ms,j = W1us + W2hj + W3(us ⊙hj) ps,j = exp(Ms,j) PS s=1 exp(Ms,j) cqj = S X s=1 ps,jus (1) 5Empirically, we find that our models trained on the adversarial data without this final title-balancing step do not seem to be exploiting this new shortcut, because they still perform equally well on the title-balanced adversarial evaluation. However, we keep this final title-balancing step in our adversary-generation procedure so as to prevent future model families from exploiting this title shortcut. 6At the time of submission: March 3rd, 2019. 2730 RNN RNN question bi-attention RNN RNN self-attention bi-attention Word Emb Char Emb context Word Emb Char Emb Query2Context Attention Softmax W,b Previous Control W,b W,b Control Unit Contextualized word emb question vector Context2Query and Query2Context Attention Softmax Context2Query Attention Bridge-entity Supervision RNN Start index RNN End index Figure 3: A 2-hop bi-attention model with a control unit. The Context2Query attention is modeled as in Seo et al. (2017). The output distribution cv of the control unit is used to bias the Query2Context attention. where W1, W2 and W3 are trainable parameters, and ⊙is element-wise multiplication. Then the query-to-context attention vector is derived as: mj = max1≤s≤S Ms,j pj = exp(mj) PJ j=1 exp(mj) qc = J X j=1 pjhj (2) We then obtain the question-aware context representation and pass it through another layer of BiLSTM: h′j = [hj; cqj; hj ⊙cqj; cqj ⊙qc] h1 = BiLSTM(h′) (3) where ; is concatenation. Self-attention is modeled upon h1 as BiAttn(h1, h1) to produce h2. Then, we apply linear projection to h2 to get the start index logits for span prediction and the end index logits is modeled as h3 = BiLSTM(h2) followed by linear projection. Furthermore, the model uses a 3-way classifier on h3 to predict the answer as “yes”, “no”, or a text span. The model is additionally supervised to predict the sentence-level supporting fact by applying a binary classifier to every sentence on h2 after self-attention. 3.3 Compositional Attention over Question To present some initial model insights for future community research, we try to improve the model’s ability to perform composite reasoning using a recurrent control unit (Hudson and Manning, 2018) that computes a distribution-overword on the question at each hop. Intuitively, the control unit imitates human’s behavior when answering a question that requires multiple reasoning steps. For the example in Fig. 1, a human reader would first look for the name of “Kasper Schmeichel’s father”. Then s/he can locate the correct answer by finding what “Peter Schmeichel” (the answer to the first reasoning hop) was “voted to be by the IFFHS in 1992”. Recall that S, J are the lengths of the question and context. At each hop i, given the recurrent control state ci−1, contextualized question representation u, and question’s vector representation q, the control unit outputs a distribution cv over all words in the question and updates the state ci: cqi = Proj[ci−1; q]; cai,s = Proj(cqi ⊙us) cvis = softmax(cais); ci = S X s=1 cvi,s · us (4) where Proj is the linear projection layer. The distribution cv tells which part of the question is related to the current reasoning hop. 
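A minimal PyTorch sketch of this control-unit update (Eq. 4) is shown below; `dim` corresponds to the 2d-dimensional question encoding, and the exact projection shapes are our assumptions rather than the published implementation.

```python
import torch
import torch.nn as nn

class ControlUnit(nn.Module):
    """One reasoning hop of the control unit (Eq. 4): produces a distribution cv
    over question words and the updated control state c_i."""
    def __init__(self, dim):
        super().__init__()
        self.proj_cq = nn.Linear(2 * dim, dim)  # Proj over [c_{i-1}; q]
        self.proj_ca = nn.Linear(dim, 1)        # Proj to a scalar score per word

    def forward(self, c_prev, q_vec, u):
        # c_prev: (dim,) previous control state, q_vec: (dim,) question vector,
        # u: (S, dim) contextualized question word representations
        cq = self.proj_cq(torch.cat([c_prev, q_vec], dim=-1))   # (dim,)
        ca = self.proj_ca(cq * u).squeeze(-1)                   # (S,) word scores
        cv = torch.softmax(ca, dim=0)                           # distribution over words
        c_new = (cv.unsqueeze(-1) * u).sum(dim=0)               # (dim,) new control state
        return cv, c_new
```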
Then we use cv and ci to bias the BiAttn described in the single-hop baseline. Specifically, we use h ⊙ci to replace h in Eqn. 1, Eqn. 2, and Eqn. 3. Moreover, after we compute the similarity matrix M between question and context words as in Eqn. 1, instead of max-pooling M on the question dimension (as done in the single-hop bi-attention), we calculate the distribution over J context words as: 2731 m′ j = cv · M pj = exp(m′ j) PJ j=1 exp(m′ j) qc = J X j=1 pjhj (5) The query-to-context attention vector qc is applied to the following computation in Eqn. 3 to get the query-aware context representation. Here, with the output distribution from the control unit, qc represents the context information that is most relevant to the sub-question of the current reasoning hop, as opposed to encoding the context most related to any question word in the original bi-attention. Overall, this model (illustrated in Fig. 3) combines the control unit from the stateof-the-art multi-hop VQA model and the widelyadopted bi-attention mechanism from text-based QA to perform composite reasoning on the context and question. Bridge Entity Supervision However, even with the multi-hop architecture to capture a hopspecific distribution over the question, there is no supervision on the control unit’s output distribution cv about which part of the question is important to the current reasoning step, thus preventing the control unit from learning the composite reasoning skill. To address this problem, we look for the bridge entity (defined in Sec. 2.2) that connects the two supporting documents. We supervise the main model to predict the bridge entity span (“Tata Consultancy Services” in Fig. 2) after the first biattention layer, which indirectly encourages the control unit to look for question information related to this entity (“company that Sachin Warrier worked for as a software engineer”) at the first hop. For examples with the answer appearing in both supporting documents,7 the intermediate supervision is given as the answer appearing in the first supporting document, while the answer in the second supporting document serves as the answerprediction supervision. 4 Experimental Setup Adversarial Evaluation and Training For all the adversarial analysis in this paper, we construct four adversarial dev sets with different numbers of adversarial documents per supporting document 7This mostly happens for questions requiring checking multiple facts of an entity. Train Reg Reg Adv Adv Eval Reg Adv Reg Adv 1-hop Base 42.32 26.67 41.55 37.65 1-hop Base + sp 43.12 34.00 45.12 44.65 2-hop 47.68 34.71 45.71 40.72 2-hop + sp 46.41 32.30 47.08 46.87 Table 1: EM scores after training on the regular data or on the adversarial training set ADD4DOCS-RAND, and evaluation on the regular dev set or the ADD4DOCSRAND adv-dev set. “1-hop Base” and ”2-hop” do not have sentence-level supporting-facts supervision. containing answer (4 or 8) and mixing strategy (randomly insert or prepend). We name these 4 dev sets “Add4Docs-Rand”, “Add4Docs-Prep”, “Add8Docs-Rand”, and “Add8Docs-Prep”. For adversarial training, we choose the “Add4DocsRand” training set since it is shown in Wang and Bansal (2018) that training with randomly inserted adversaries yields the model that is the most robust to the various adversarial evaluation settings. In the adversarial training examples, the fake titles and answers are sampled from the original training set. 
We randomly select 40% of the adversarial examples and add them to the regular training set to build our adversarial training set. Dataset and Metrics We use the HotpotQA (Yang et al., 2018) dataset’s distractor setting. We show EM scores rather than F1 scores because our generated fake answer usually has word-overlap with the original answer, but the overall result trends and take-away’s are the same even for F1 scores. Training Details We use 300-d pre-trained GloVe word embedding (Pennington et al., 2014) and 80-d encoding LSTM-RNNs. The control unit of the 2-hop model has an 128-d internal state. We train the models using Adam (Kingma and Ba, 2014) optimizer, with an initial learning rate of 0.001. We keep exponential moving averages of all trainable variables in our models and use them during the evaluation. 5 Results Regularly-Trained Models In our main experiment, we compare four models’ performance on the regular HotpotQA and Add4Docs-Rand dev sets, when trained on two different training sets (regular or adversarial), respectively. The first two columns in Table 1 show the result of models trained on the regular training set only. As shown 2732 A4D-R A4D-P A8D-R A8D-P 1-hop Base 37.65 37.72 34.14 34.84 1-hop Base + sp 44.65 44.51 43.42 43.59 2-hop 40.72 41.03 37.26 37.70 2-hop + sp 46.87 47.14 44.28 44.44 Table 2: EM scores on 4 adversarial evaluation settings after training on ADD4DOCS-RAND. ‘-R’ and ‘-P’ represent random insertion and prepending. A4D and A8D stands for ADD4DOCS and ADD8DOCS advdev sets. in the first row, the single-hop baseline trained on regular data performs poorly on the adversarial evaluation, suggesting that it is indeed exploiting the reasoning shortcuts instead of actually performing the multi-hop reasoning in locating the answer. After we add the supporting fact supervision (2nd row in Table 1), we observe a significant improvement8 (p < 0.001) on the adversarial evaluation, compared to the baseline without this strong supervision. However, this score is still more than 9 points lower than the model’s performance on the regular evaluation. Next, the 2-hop bi-attention model with the control unit obtains a higher EM score than the baseline in the adversarial evaluation, demonstrating better robustness against the adversaries. After this 2-hop model is additionally supervised to predict the sentencelevel supporting facts, the performance in both regular and adversarial evaluation decreases a bit, but still outperforms both baselines in the regular evaluation (with stat. significance). One possible explanation for this performance drop is that the 2-hop model without the extra task of predicting supporting facts overfits to the task of the final answer prediction, thus achieving higher scores. Adversarially-Trained Models We further train all four models with the adversarial training set, and the results are shown in the last two columns in Table 1. Comparing the numbers horizontally, we observe that after adversarial training, both the baselines and the 2-hop models with control unit gained statistically significant9 improvement on the adversarial evaluations. Comparing the numbers in Table 1 vertically, we show that the 2-hop model (row 3) achieves significantly (p-value < 0.001) better results than the baseline (row 1) on both regular and adver8All stat. signif. is based on bootstrapped randomization test with 100K samples (Efron and Tibshirani, 1994). 9Statistical significance of p < 0.01. 
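As an aside, the bootstrapped significance test mentioned in footnote 8 can be sketched with a paired bootstrap over per-example EM scores; this is one common variant for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np

def paired_bootstrap_p(scores_a, scores_b, n_samples=100000, seed=0):
    """p-value that system A is not better than system B, given per-example
    0/1 EM scores for the two systems on the same evaluation examples."""
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    n = len(a)
    worse = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)     # resample examples with replacement
        if a[idx].mean() <= b[idx].mean():   # A fails to beat B on this resample
            worse += 1
    return worse / n_samples
```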
Train Regular Regular Adv Adv Eval Regular Adv Regular Adv 2-hop 47.68 34.71 45.71 40.72 2-hop - Ctrl 46.12 32.46 45.20 40.32 2-hop - Bridge 43.31 31.80 41.90 37.37 1-hop Base 42.32 26.67 41.55 37.65 Table 3: Ablation for the Control unit and Bridge-entity supervision, reported as EM scores after training on the regular or adversarial ADD4DOCS-RAND data, and evaluation on regular dev set and ADD4DOCS-RAND adv-dev set. Note that 1-hop Base is same as 2-hop without both control unit and bridge-entity supervision. sarial evaluation. After we add the sentence-level supporting-fact supervision, the 2-hop model (row 4) obtains further improvements in both regular and adversarial evaluation. Overall, we hope that these initial improvements will motivate the development of new models that combine explicit compositional reasoning with adversarial training. Adversary Ablation In order to test the robustness of the adversarially-trained models against new adversaries, we additionally evaluate them on dev sets with varying numbers of adversarial documents and a different adversary placement strategy elaborated in Sec. 4. As shown in the first two columns in Table 2, neither the baselines nor the 2-hop models are affected when the adversarial documents are pre-pended to the context. When the number of adversarial documents per supporting document with answer is increased to eight, all four models’ performance drops by more than 1 points, but again the 2-hop model, with or without supporting-fact supervision, continues to outperform its single-hop counterpart. Control Unit Ablation We also conduct an ablation study on the 2-hop model by removing the control unit. As shown in the first two rows of Table 3, the model with the control unit outperforms the alternative in all 4 settings with different training and evaluation data combinations. The results validate our intuition that the control unit can improve the model’s multi-hop reasoning ability and robustness against adversarial documents. Bridge-Entity Supervision Ablation We further investigate how intermediate supervision of finding the bridge entity affects the overall performance. For this ablation, we also construct another 2-hop model without the bridge-entity super2733 vision, using 2 unshared layers of bi-attention (2hop - Bridge), as opposed to our previous model with 2 parallel, shared layers of bi-attention. As shown in Table 3, both the 2-hop and 1-hop models without the bridge-entity supervision suffer large drops in the EM scores, suggesting that intermediate supervision is important for the model to learn the compositional reasoning behavior. 6 Analysis In this section, we seek to understand the behavior of the model under the influence of the adversarial examples. Following Jia and Liang (2017), we focus on examples where the model predicted the correct answer on the regular dev set. This portion of the examples is divided into “model-successes” — where the model continues to predict the correct answer given the adversarial documents, and “model-failures” — where the model makes the wrong prediction on the adversarial example. Manual Verification of Adversaries We first verify that the adversarial documents do not contradict the original answer. As elaborated in Sec. 2.2, we assume that the bridge entity is the title of a supporting document and substitute it with another title sampled from the training/dev set. 
Thus, the contradiction could arise when the adversarial document p′ 2 is linked with p1 with another entity other than the titles. We randomly sample 50 examples in ADD4DOCS-RAND, and find 0 example where the fake answers in the adversarial docs contradict the original answer. This shows that our adversary construction is effective in breaking the logical connection between the supporting documents and adversaries. Model Error Analysis Next, we try to understand the model’s false prediction in the “modelfailures” subset on ADD4DOCS-RAND. For the 1-hop Baseline trained on regular data (2nd row, 2nd column in Table 1), in 96.3% of the failures, the model’s prediction spans at least one of the adversarial documents. For the same baseline trained with adversarial data, the model’s prediction spans at least one adversarial document in 95.4% of the failures. We further found that in some examples, the span predicted on the adversarial data is much longer than the span predicted on the original dev set, sometimes starting from a word in one document and ending several documents later. This is because our models predict the start and end indexes separately, and thus could be affected by different adversarial documents in the context. Adversary Failure Analysis Finally, we investigate those “model-successes”, where the adversarial examples fail to fool the model. Specifically, we find that some questions can be answered with a single document. For the question “Who produced the film that was Jennifer Kent’s directorial debut?”, one supporting document states “The Babadook is ... directed by Jennifer Kent in her directorial debut, and produced by Kristina Tarbell and Kristian Corneille.” In this situation, even an adversary is unable to change the single-hop nature of the question. We refer to the appendix for the full example. Toward Better Multi-Hop QA Datasets Lastly, we provide some intuition that is of importance for future attempts in collecting multi-hop questions. In general, the final sub-question of a multi-hop question should not be over-specific, so as to avoid large semantic match between the question and the surrounding context of the answer. Compared to the question in Fig. 1, it is harder to find a shortcut for the question “What government position was held by the woman who portrayed Corliss Archer in ...” because the final sub-question (“What government position”) contains less information for the model to directly exploit, and it is more possible that a distracting document breaks the reasoning shortcut by mentioning another government position held by a person. 7 Related Works Multi-hop Reading Comprehension The last few years have witnessed significant progress on large-scale QA datasets including cloze-style blank-filling tasks (Hermann et al., 2015), opendomain QA (Yang et al., 2015), QA with answer span prediction (Rajpurkar et al., 2016, 2018), and generative QA (Nguyen et al., 2016). However, all of the above datasets are confined to a singledocument context per question domain. Earlier attempts in multi-hop QA focused on reasoning about the relations in a knowledge base (Jain, 2016; Zhou et al., 2018; Lin et al., 2018) or tables (Yin et al., 2015). The bAbI dataset (Weston et al., 2016) uses synthetic contextx and requires the model to combine multiple pieces of evidence in the text-based context. 2734 TriviaQA (Joshi et al., 2017) includes a small portion of questions that require cross-sentence inference. Welbl et al. 
(2017) uses Wikipedia articles as the context and subject-relation pairs as the query, and construct the multi-hop QAngaroo dataset by traversing a directed bipartite graph. It is designed in a way such that the evidence required to answer a query could be spread across multiple documents that are not directly related to the query. HotpotQA (Yang et al., 2018) is a more recent multi-hop dataset that has crowd-sourced questions with diverse syntactic and semantic features. HotpotQA and QAngaroo also differ in their types of multi-hop reasoning covered. Because of the knowledge-base domain and the triplet format used in the construction, QAngaroo’s questions usually require inferring the desired property of a query subject by finding a bridge entity that connects the query to the answer. HotpotQA includes three more types of question, each requiring a different reasoning paradigm. Some examples require inferring the bridge entity from the question (Type I in Yang et al. (2018)), while others demand checking facts or comparing subjects’ properties from two different documents (Type II and comparison question). Adversarial Evaluation and Training Jia and Liang (2017) first applied adversarial evaluation to QA models on the SQuAD (Rajpurkar et al., 2016) dataset by generating a sentence that only resembles the question syntactically and appending it to the paragraph. They report that the performances of state-of-the-art QA models (Seo et al., 2017; Hu et al., 2018; Huang et al., 2018) drop significantly when evaluated on the adversarial data. Wang and Bansal (2018) further improves the AddSent adversary and proposed AddSentDiverse that employs a diverse vocabulary for the question conversion procedure. They show that models trained with such adversarial examples can be robust against a wide range of adversarial evaluation samples. Our paper shares the spirit with these two works as we also try to investigate models’ over-stability to semantics-altering perturbations. However, our study also differs from the previous works (Jia and Liang, 2017; Wang and Bansal, 2018) in two points. First, we generate adversarial documents by replacing the answer and bridge entities in the supporting documents instead of converting the question into a statement. Second, our adversarial documents still preserve words with common semantic meaning to the question so that it can distract models that are exploiting the reasoning shortcut in the context. 8 Conclusion In this work, we identified reasoning shortcuts in the HotpotQA dataset where the model can locate the answer without multi-hop reasoning. We constructed adversarial documents that can fool the models exploiting the shortcut, and found that the performance of a state-of-the-art model dropped significantly under our adversarial examples. We showed that this baseline can improve on the adversarial evaluation after being trained on the adversarial data. We next proposed to use a control unit that dynamically attends to the question to guide the bi-attention in multi-hop reasoning. Trained on the regular data, this 2-hop model is more robust against the adversary than the baseline; and after being trained with adversarial data, this model achieved further improvements on the adversarial evaluation and also outperforms the baseline. Overall, we hope that these insights and initial improvements will motivate the development of new models that combine explicit compositional reasoning with adversarial training. 
9 Acknowledgement We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, Salesforce Deep Learning Research Grant, Nvidia GPU awards, Amazon AWS, and Google Cloud Credits. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. References Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2735 Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In IJCAI. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. In ICLR. Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine reasoning. In Proceedings of ICLR. Sarthak Jain. 2016. Question answering over knowledge base using factual memory networks. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR. Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Association for Computational Linguistics (ACL). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. 
In International Conference on Learning Representations (ICLR). Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. In International Conference on Machine Learning (ICML). Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Yicheng Wang and Mohit Bansal. 2018. Robust machine comprehension models via adversarial training. In NAACL. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. In TACL. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In ICLR. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In ICLR. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint. Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multirelation question answering. In Proceedings of the 27th International Conference on Computational Linguistics. 2736 Who produced the film that was Jennifer Kent's directorial debut? The Aphra Behn is a 2014 Australian psychological horror film written and directed by Scott Hahn in her directorial debut, and produced by Kristina Mutrux and Kristian ionesco. ... Jennifer Kent is an Australian actress, writer and director, best known for her horror film "The Babadook" (2014), which was her directorial debut. She is currently filming her second film, "The Nightingale". You Can't Kill Stephen King is a 2012 American comedy horror film that was directed by Monroe Mann, Ronnie Khalil, and Jorge Valdés-Iga, and is the directorial debut of Khalil and the feature film directorial debut of Mann ... The Babadook is a 2014 Australian psychological horror film written and directed by Jennifer Kent in her directorial debut, and produced byKristina Ceyton and Kristian Moliere. The film stars Essie Davis, Noah Wiseman, Daniel Henshall, Hayley McElhinney, Barbara West, and Ben Winspear.', ' It is based on the 2005 short film "Monster", also written and directed by Kent. The Iron Giant is a 1999 American animated sciencefiction comedy-drama action film using both traditional animation and computer animation, produced by and directed by Brad Bird in his directorial debut. Prediction: Kristina Ceyton and Kristian Moliere Question Golden Reasoning Chain Docs Distractor Docs Adversarial Doc Prediction under adversary: Kristina Ceyton and Kristian Moliere Figure 4: A single-hop HotpotQA example that cannot be fixed with our adversary. 
Appendix A Examples We show a HotpotQA (Yang et al., 2018) example in which our adversarial documents fail to fool the model into predicting the fake answer. As shown in Fig. 4, the question can be directly answered by the second document in the Golden Reasoning Chain alone. Therefore, it is logically impossible to create an adversarial document that breaks this single-hop situation without introducing a contradiction.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2737–2747 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2737 Exploiting Explicit Paths for Multi-hop Reading Comprehension Souvik Kundu†∗and Tushar Khot‡ and Ashish Sabharwal‡ and Peter Clark‡ †Department of Computer Science, National University of Singapore ‡Allen Institute for Artificial Intelligence, Seattle, WA, U.S.A. [email protected], {tushark,ashishs,peterc}@allenai.org Abstract We propose a novel, path-based reasoning approach for the multi-hop reading comprehension task where a system needs to combine facts from multiple passages to answer a question. Although inspired by multi-hop reasoning over knowledge graphs, our proposed approach operates directly over unstructured text. It generates potential paths through passages and scores them without any direct path supervision. The proposed model, named PathNet, attempts to extract implicit relations from text through entity pair representations, and compose them to encode each path. To capture additional context, PathNet also composes the passage representations along each path to compute a passage-based representation. Unlike previous approaches, our model is then able to explain its reasoning via these explicit paths through the passages. We show that our approach outperforms prior models on the multi-hop Wikihop dataset, and also can be generalized to apply to the OpenBookQA dataset, matching stateof-the-art performance. 1 Introduction Many reading comprehension (RC) datasets (Rajpurkar et al., 2016; Trischler et al., 2017; Joshi et al., 2017) have been proposed recently to evaluate a system’s ability to answer a question from a given text passage. However, most of the questions in these datasets can be answered by using only a single sentence or passage. As a result, systems designed for these tasks may not be able to compose knowledge from multiple sentences or passages, a key aspect of natural language understanding. To remedy this, new datasets (Weston et al., 2015; Welbl et al., 2018; Khashabi et al., 2018a; Mihaylov et al., 2018) have been proposed, ∗Work performed while doing an internship at the Allen Institute for Artificial Intelligence. Query: (always breaking my heart, record label, ?) Supporting Passages: (p1) “Always Breaking My Heart” is the second single from Belinda Carlisle’s A Woman and a Man album , released in 1996 ( see 1996 in music ) . It ... (p2) A Woman and a Man is the sixth studio album by American singer Belinda Carlisle, released in the United Kingdom on September 23, 1996 by Chrysalis Records (then part of the EMI Group, ... Candidates: chrysalis records, emi group, virgin records, ... Answer: chrysalis records Paths: (“Always Breaking My Heart” ... single from ... A Woman and a Man) (A Woman and a Man ... released ... by ... Chrysalis Records) Figure 1: Example illustrating our proposed path extraction and reasoning approach. requiring a system to combine information from multiple sentences in order to arrive at the answer, referred to as multi-hop reasoning. Multi-hop reasoning has been studied for question answering (QA) over structured knowledge graphs (Lao et al., 2011; Guu et al., 2015; Das et al., 2017). Many of the successful models explicitly identify paths in the knowledge graph that led to the answer. A strength of these models is high interpretability, arising from explicit pathbased reasoning over the underlying graph structure. 
However, they cannot be directly applied to QA in the absence of such structure. Consequently, most multi-hop RC models over unstructured text (Dhingra et al., 2017; Hu et al., 2018) extend standard attention-based models from RC by iteratively updating the attention to indirectly “hop” over different parts of the text. Recently, graph-based models (Song et al., 2018; Cao et al., 2018) have been proposed for the WikiHop dataset (Welbl et al., 2018). Nevertheless, these models still only implicitly combine knowl2738 edge from all passages, and are therefore unable to provide explicit reasoning paths. We propose an approach1 for multiple choice RC that explicitly extracts potential paths from text (without direct path supervision) and encodes the knowledge captured by each path. Figure 1 shows how to apply this approach to an example in the WikiHop dataset. It shows two sample paths connecting an entity in the question (Always Breaking My Heart) to a candidate answer (Chrysalis Records) through a singer (Belinda Carlisle) and an album (A Woman and a Man). To encode the path, our model, named PathNet, first aims to extract implicit (latent) relations between entity pairs in a passage based on their contextual representations. For example, it aims to extract the implicit single from relation between the song and the name of the album in the first passage. Similarly, it extracts the released by relation between the album and the record label in the second passage. It learns to compose the extracted implicit relations such that they map to the main relation in the query, in this case record label. In essence, the motivation is to learn to extract implicit relations from text and to identify their valid compositions, such as: (x, single from, y), (y, released by, z) →(x, record label, z). Due to the absence of direct supervision on these relations, PathNet does not explicitly extract these relations. However, our qualitative analysis on a sampled set of instances from WikiHop development set shows that the top scoring paths in 78% of the correctly answered questions have implied relations in the text that could be composed to derive the query relations. In addition, PathNet also learns to compose aggregated passage representations in a path to capture more global information: encoding(p1), encoding(p2) →(x, record label, z). This passagebased representation is especially useful in domains such as science question answering where the lack of easily identifiable entities limits the effectiveness of the entity-based path representation. While this passage-based representation is less interpretable than the entity-based path representation, it still identifies the two passages used to select the answer, compared to a spread out attention over all documents produced by previous graph1The source code is available at https://github. com/allenai/PathNet based approaches. We make three main contributions: (1) A novel path-based reasoning approach for multi-hop QA over text that produces explanations in the form of explicit paths; (2) A model, PathNet, which aims to extract implicit relations from text and compose them; and (3) Outperforming prior models on the target WikiHop dataset2 and generalizing to the open-domain science QA dataset, OpenBookQA, with performance comparable to prior models. 2 Related Work We summarize related work in QA over text, semistructured knowledge, and knowledge graphs. Multi-hop RC. 
Recent datasets such as bAbI (Weston et al., 2015), Multi-RC (Khashabi et al., 2018a), WikiHop (Welbl et al., 2018), and OpenBookQA (Mihaylov et al., 2018) have encouraged research in multi-hop QA over text. The resulting multi-hop models can be categorized into state-based and graph-based reasoning models. State-based reasoning models (Dhingra et al., 2017; Shen et al., 2017; Hu et al., 2018) are closer to a standard attention-based RC model with an additional “state” representation that is iteratively updated. The changing state representation results in the model focusing on different parts of the passage during each iteration, allowing it to combine information from different parts of the passage. Graph-based reasoning models (Dhingra et al., 2018; Cao et al., 2018; Song et al., 2018), on the other hand, create graphs over entities within the passages and update entity representations via recurrent or convolutional networks. In contrast, our approach explicitly identifies paths connecting entities in the question to the answer choices. Semi-structured QA. Our model is closer to Integer Linear Programming (ILP) based methods (Khashabi et al., 2016; Khot et al., 2017; Khashabi et al., 2018b), which define an ILP program to find optimal support graphs for connecting the question to the choices through a semi-structured knowledge representation. However, these models require a manually authored and tuned ILP program, and need to convert text into a semi-structured representation—a process that is often noisy (such as using Open IE tu2Other systems, such as by Zhong et al. (2019), have recently appeared on the WikiHop leaderboard (http:// qangaroo.cs.ucl.ac.uk/leaderboard.html). 2739 ples (Khot et al., 2017), SRL frames (Khashabi et al., 2018b)). Our model, on the other hand, is trained end-to-end, and discover relevant relational structure from text. Instead of an ILP program, Jansen et al. (2017) train a latent ranking perceptron using features from aggregated syntactic structures from multiple sentences. However, their system operates at the detailed (and often noisy) level of dependency graphs, whereas we identify entities and let the model learn implicit relations and their compositions. Knowledge Graph QA. QA datasets on knowledge graphs such as Freebase (Bollacker et al., 2008), require systems to map queries to a single relation (Bordes et al., 2015), a path (Guu et al., 2015), or complex structured queries (Berant et al., 2013) over these graphs. While early models (Lao et al., 2011; Gardner and Mitchell, 2015) focused on creating path-based features, recent neural models (Guu et al., 2015; Das et al., 2017; Toutanova et al., 2016) encode the entities and relations along a path and compose them using recurrent networks. Importantly, the input knowledge graphs have entities and relations that are shared across all training and test examples, which the model can exploit during learning (e.g., via learned entity and relation embeddings). When reasoning with text, our model must learn these representations purely based on their local context. 3 Approach Overview We focus on the multiple-choice RC setting: given a question and a set of passages, the task is to find the correct answer among a predefined set of candidates. The proposed approach can be applied to m-hop reasoning, as discussed briefly in the corresponding sections for path extraction, encoding, and scoring. 
Since our target datasets primarily need 2-hop reasoning3 and the potential of semantic drift with increased number of hops (Fried et al., 2015; Khashabi et al., 2019), we focus on and assess the case of 2-hop paths (m = 2). As discussed later (see Footnote 4), our path-extraction step scales exponentially with m. Using m = 2 keeps this step tractable, while still covering almost all examples in our target datasets. In WikiHop, a question Q is given in the form of a tuple (he, r, ?), where he represents the head en3We found that most WikiHop questions can be answered with 2 hops and OpenBookQA also targets 2-hop questions. tity and r represents the relation between he and the unknown tail entity. The task is to select the unknown tail entity from a given set of candidates {c1, c2, . . . cN}, by reasoning over supporting passages P = p1, . . . , pM. To perform multi-hop reasoning, we extract multiple paths P (cf. Section 4) connecting he to each ck from the supporting passages P. The j-th 2-hop path for candidate ck is denoted pkj, where pkj = he →e1 →ck, and e1 is referred to as the intermediate entity. In OpenBookQA, different from WikiHop, the questions and candidate answer choices are plain text sentences. To construct paths, we extract all head entities from the question and tail entities from candidate answer choices, considering all noun phrases and named entities as entities. This often results in many 2-hop paths connecting a question to a candidate answer choice via the same intermediate entity. With {he1, he2, . . .} representing the list of head entities from a question, and {ck1, ck2, . . .} the list of tail entities from candidate ck, the j-th path connecting ckα to heβ can be represented as: pα,β kj = heα →e1 →ckβ. For simplicity, we omit the notations α and β from path representation. Next, the extracted paths are encoded and scored (cf. Section 5). Following, the normalized path scores are summed for each candidate to give a probability distribution over the candidate answer choices. 4 Path Extraction The first step in our approach is extracting paths from text passages. Consider the example in Figure 1. Path extraction proceeds as follows: (a) We find a passage p1 that contains a head entity he from the question Q. In our example, we would identify the first supporting passage that contains always breaking my heart. (b) We then find all named entities and noun phrases that appear in the same sentence as he or in the subsequent sentence. Here, we would collect Belinda Carlisle, A Woman and a Man, and album as potential intermediate entity e1. (c) Next, we find a passage p2 that contains the potential intermediate entity identified above. For clarity, we refer to the occurrence of e1 in p2 as e1′. By design, (he, e1) and (e1′, ck) are located in different passages. For instance, we find the second passage that contains both Belinda Carlisle and A Woman and a Man. 2740 (d) Finally, we check whether p2 contains any of the candidate answer choices. For instance, p2 contains chrysalis records and emi group. The resulting extracted paths can be summarized as a set of entity sequences. In this case, for the candidate answer chrysalis records, we obtain a set of two paths: (always breaking my heart →Belinda Carlisle →chrysalis records), (always breaking my heart →A Man and a Woman → chrysalis records). Similarly, we can collect paths for the other candidate, emi group. Notably, our path extraction method can be easily extended for more hops. 
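Before turning to the m-hop generalization, steps (a)–(d) can be summarized in a minimal sketch. The code below is an illustrative simplification rather than the released PathNet pipeline: it assumes passages have already been pre-processed into per-sentence sets of entity mentions (noun phrases and named entities), and every function and variable name here is hypothetical.

```python
from typing import Dict, List, Set, Tuple


def extract_2hop_paths(
    head_entities: Set[str],
    candidates: Set[str],
    passages: Dict[str, List[Set[str]]],  # passage id -> entity mentions per sentence
) -> Set[Tuple[str, str, str]]:
    """Enumerate (head entity, intermediate entity, candidate) triples, steps (a)-(d)."""
    paths = set()
    for p1, sents in passages.items():
        for idx, ents in enumerate(sents):
            heads_here = head_entities & ents                      # step (a)
            if not heads_here:
                continue
            nxt = sents[idx + 1] if idx + 1 < len(sents) else set()
            intermediates = (ents | nxt) - head_entities           # step (b)
            for e1 in intermediates:
                for p2, sents2 in passages.items():                # step (c)
                    if p2 == p1:                                   # (he, e1) and (e1', ck) must lie in different passages
                        continue
                    flat = set().union(*sents2)
                    if e1 not in flat:
                        continue
                    for he in heads_here:
                        for ck in candidates & flat:               # step (d)
                            paths.add((he, e1, ck))
    return paths
```

Each returned triple corresponds to one path he → e1 → ck; in practice the mention locations needed later by the encoder would be recorded alongside each triple.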
Specifically, for mhop reasoning, steps (b) and (c) are repeated (m− 1) times, where the intermediate entity from step (c) becomes the head entity for the subsequent step (b). For larger values of m, maintaining tractability of this approach would require optimizing the complexity of identifying the passages containing an entity (steps (a) and (c)) and limiting the number of neighboring entities considered (step (b)).4 For one hop reasoning, i.e., when a single passage is sufficient to answer a question, we construct the path with e1 as null. In this case, both he and ck are found in a single passage. In this way, for a task requiring more hops, one only need to guess the maximum number of hops. If some questions in that task require less hops, our proposed approach can easily handle that by assigning the intermediate entity to null. For instance, in this work, our approach can handle 1-hop reasoning although it is developed for 2-hop. 5 PathNet: Path-based Multi-hop QA Model Once we have all potential paths, we score them using the proposed model, named PathNet, whose overview is depicted in Figure 2. The key component is the path-scorer module that computes the score for each path pkj. We normalize these scores across all paths, and compute the probability of a candidate ck being the correct answer by summing the normalized scores of the paths associated with ck: prob(ck) = X j score(pkj). (1) Next, we describe three main model components, operating on the following inputs: question 4If the search step takes no more than s steps and identifies a fixed number k of passages, and we select up to e neighboring entities, our approach would have a time complexity of O (ke)m−1sm for enumerating m-hop paths. Figure 2: Architecture of the proposed model. Q, passages p1 and p2, candidate ck, and the locations of he, e1, e′ 1, ck in these passages: (1) Embedding and Encoding (§ 5.1) (2) Path Encoding (§ 5.2) (3) Path Scoring (§ 5.3). In Figure 3, we present the model architecture for these three components used for scoring the paths. 5.1 Embedding and Encoding We start by describing how we embed and contextually encode all pieces of text: question, supporting passages, and candidate answer choices. For word embedding, we use pretrained 300 dimensional vectors from GloVe (Pennington et al., 2014), randomly initializing vectors for out of vocabulary (OOV) words. For contextual encoding, we use bi-directional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997). Let T, U, and V represent the number of tokens in the p-th supporting passage, question, and k-th answer candidate, respectively. The final encoded representation for the p-th supporting passage can be obtained by stacking these vectors into Sp ∈RT×H, where H is the number of hidden units for the BiLSTMs. The sequence level encoding for the question, Q ∈RU×H, and for the k-th candidate answer, Ck ∈RV ×H, are obtained similarly. We use row vector representation (e.g., R1×H) for all vectors in this paper. 5.2 Path Encoding After extracting the paths as discussed in Section 4, they are encoded using an end-to-end neural network. This path encoder consists of two components: context-based and passage-based. 2741 Figure 3: Architecture of the path scoring module, shown here for 2-hop paths. 5.2.1 Context-based Path Encoding This component aims to implicitly encode the relation between he and e1, and between e1′ and ck. These implicit relation representations are them composed together to encode a path representation for he →e1 . . . e1′ →ck. 
First, we extract the contextual representations for each of he, e1, e1′, and ck. Based on the locations of these entities in the corresponding passages, we extract the boundary vectors from the passage encoding representation. For instance, if he appears in the p-th supporting passage from token i1 to i2 (i1 ≤i2), then the contextual encoding of he, ghe ∈R2H is taken to be: ghe = sp1,i1 || sp1,i2, where || denotes the concatenation operation. If he appears in multiple locations within the passage, we use the mean vector representation across all of these locations. The location encoding vectors ge1, ge1′, and gck are obtained similarly. Next, we extract the implicit relation between he and e1 as rhe,e1 ∈RH, using a feed forward layer: rhe,e1 = FFL(ghe, ge1) , (2) where FFL is defined as: FFL(a, b) = tanh(aWa + bWb) . (3) Here a ∈RH′ and b ∈RH′′ are input vectors, and Wa ∈RH′×H and Wb ∈RH′′×H are trainable weight matrices. The bias vectors are not shown here for simplicity. Similarly, we compute the implicit relation between e1′ and ck as re1′,ck ∈RH, using their location encoding vectors ge1′ and gck. Finally, we compose all implicit relation vectors along the path to obtain a context-based path representation xctx ∈RH given by: xctx = comp(rhe,e1 , re1′,ck) (4) For fixed length paths, we can use a feed forward network as the composition function. E.g., for 2-hop paths, we use FFL(rhe,e1 , re1′,ck). For variable length paths, we can use recurrent composition networks such as LSTM, GRU. We compare these composition functions in Section 6.3. 5.2.2 Passage-based Path Encoding In this encoder, we use entire passages to compute the path representation. As before, suppose (he, e1) and (e1′, ck) appear in supporting passages p1 and p2, respectively. We encode each of p1 and p2 into a single vector based on passage-question interaction. As discussed below, we first compute a question-weighted representation for passage tokens and then aggregate it across the passage. Question-Weighted Passage Representation: For the p-th passage, we first compute the attention matrix A ∈RT×U, capturing the similarity between the passage and question words. Then, we calculate a question-aware passage representation Sq1 p ∈RT×H, where Sq1 p = AQ. Similarly, a passage-aware question representation, Qp ∈ RU×H, is computed, where Qp = A⊤Sp. Further, we compute another passage representation Sq2 p = AQp ∈RT×H. Intuitively, Sq1 p captures important passage words based on the question, whereas Sq2 p is another passage representation which focuses on the interaction with passage-relevant question words. The idea of encoding a passage after interacting with the question multiple times is inspired from the Gated Attention Reader model (Dhingra et al., 2017). 2742 Aggregate Passage Representation: To derive a single passage vector, we first concatenate the two passage representations for each token, obtaining Sq p = Sq1 p || Sq2 p ∈RT×2H. We then use an attentive pooling mechanism for aggregating the token representations. The aggregated vector ˜sp ∈R2H for the p-th passage is obtained as: ap t ∝exp(sq p,tw⊤); ˜sp = apSq p (5) where w ∈R2H is a learned vector. In this way, we obtain the aggregated vector representations for both supporting passages p1 and p2 as ˜sp1 ∈R2H and ˜sp2 ∈R2H, respectively. 
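To make these encoder components concrete, the following is a minimal PyTorch-style sketch of the FFL layer (Eq. 3, with biases omitted as in the paper), the fixed-length 2-hop context-based composition (Eqs. 2 and 4), and the attentive pooling of Eq. 5. The module names, and the assumption that every boundary vector g is 2H-dimensional, are ours; this is not the released implementation.

```python
import torch
import torch.nn as nn


class FFL(nn.Module):
    """FFL(a, b) = tanh(a @ Wa + b @ Wb), as in Eq. 3 (biases omitted)."""

    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.wa = nn.Linear(dim_a, dim_out, bias=False)
        self.wb = nn.Linear(dim_b, dim_out, bias=False)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.wa(a) + self.wb(b))


class ContextPathEncoder(nn.Module):
    """Encode a 2-hop path from the boundary vectors of he, e1, e1', ck (Eqs. 2 and 4)."""

    def __init__(self, hidden: int):
        super().__init__()
        self.rel1 = FFL(2 * hidden, 2 * hidden, hidden)  # (he, e1)   -> implicit relation
        self.rel2 = FFL(2 * hidden, 2 * hidden, hidden)  # (e1', ck)  -> implicit relation
        self.comp = FFL(hidden, hidden, hidden)          # fixed-length (2-hop) composition

    def forward(self, g_he, g_e1, g_e1_prime, g_ck):
        r1 = self.rel1(g_he, g_e1)         # r_{he, e1}
        r2 = self.rel2(g_e1_prime, g_ck)   # r_{e1', ck}
        return self.comp(r1, r2)           # x_ctx


class AttentivePooling(nn.Module):
    """a_t proportional to exp(s_t w^T); pooled vector = a @ S (Eq. 5)."""

    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(dim, 1, bias=False)

    def forward(self, s: torch.Tensor) -> torch.Tensor:   # s: (T, dim)
        a = torch.softmax(self.w(s).squeeze(-1), dim=-1)  # (T,)
        return a @ s                                      # (dim,)
```

The boundary vectors passed to ContextPathEncoder would come from the location-based extraction described above; for variable-length paths, the comp layer would be replaced by a recurrent composition (LSTM or GRU), as compared in Section 6.3.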
Composition: We compose the aggregated passage vectors to obtain the passage-based path representation xpsg ∈RH similar to Equation 4: xpsg = comp(˜sp1 , ˜sp2) (6) Similar to the composition function in contextbased path encoding, this composition function can be a feed-forward network for fixed length or recurrent networks for variable length paths. 5.3 Path Scoring Encoded paths are scored from two perspectives. Context-based Path Scoring: We score context-based paths based on their interaction with the question encoding. First, we aggregate the question into a single vector. We take the first and last hidden state representations from the question encoding Q to obtain an aggregated question vector representation. The aggregated question vector ˜q ∈RH is ˜q = (q0 || qU) Wq , (7) where Wq ∈R2H×H is a learnable weight matrix. The combined representation yxctx,q ∈RH of the question and a context-based path is computed as: yxctx,q = FFL(xctx , ˜q) Finally, we derive scores for context-based paths: zctx = yxctx,qw⊤ ctx , (8) where wctx ∈RH is a trainable vector. Passage-based Path Scoring: We also score paths based on the interaction between the passage-based path encoding vector and the candidate encoding. In this case, only candidate encoding is used since passage-based path encoding already uses the question representation. We aggregate the representation Ck for candidate ck into a single vector ˜ck ∈RH by applying an attentive pooling operation similar to Equation 5. The score for passage-based path is then computed as follows: zpsg = ˜ck x⊤ psg (9) Finally, the unnormalized score for path pkj is: z = zctx + zpsg (10) and its normalized version, score(pkj), is calculated by applying the softmax operation over all the paths and candidate answers. 6 Experiments We start by describing the experimental setup, and then present results and an analysis of our model. 6.1 Setup We consider the standard (unmasked) version of the recently proposed WikiHop dataset (Welbl et al., 2018). WikiHop is a large scale multihop QA dataset consisting of about 51K questions (5129 Dev, 2451 Test). Each question is associated with an average of 13.7 supporting Wikipedia passages, each with 36.4 tokens on average. We also evaluate our model on OpenBookQA (Mihaylov et al., 2018), a very recent and challenging multi-hop QA dataset with about 6K questions (500 Dev, 500 Test), each with 4 candidate answer choices. Since OpenBookQA does not have associated passages for the questions, we retrieve sentences from a text corpus to create single sentence passages. We start with a corpus of 1.5M sentences used by previous systems (Khot et al., 2017) for science QA. It is then filtered down to 590K sentences by identifying sentences about generalities and removing noise. We assume sentences that start with a plural noun are likely to capture general concepts, e.g. “Mammals have fur”, and only consider such sentences. We also eliminate noisy and irrelevant sentences by using a few rules such as root of the parse tree must be a sentence, it must not contain proper nouns. This corpus is also provided along with our code. Next, we need to retrieve sentences that can lead to paths between the question q and an answer choice c. Doing so naively will only retrieve sentences that directly connect entities in q to c, 2743 Model Accuracy (%) Dev Test Welbl et al. (2018) 42.9 Dhingra et al. (2018) 56.0 59.3 Song et al. (2018) 62.8 65.4 Cao et al. (2018) 64.8 67.6 PathNet 67.4† 69.6† Table 1: Accuracy on the WikiHop dataset. 
†Statistically significant (Wilson, 1927) Model Accuracy (%) Dev Test KER (OMCS) 54.4 52.2 KER (WordNet) 55.6 51.4 KER (OB + OMCS) 54.6 50.8 KER (OB + WordNet) 54.2 51.2 KER (OB + Text) 55.4 52.0 PathNet (OB + Text) 55.0 53.4 Table 2: Accuracy on the OpenBookQA dataset. i.e., 1-hop paths. To facilitate 2-hop reasoning, we first retrieve sentences based on words in q, and for each retrieved sentence s1, we find sentences that overlap with both s1 and c. Each path is scored using idf(q, s1)·idf(s1, s2)·idf(s2, c), where s2 is the second retrieved sentence and idf(w) is the idf score of token w based on the input corpus: idf(x, y) = P w∈x∩y idf(w) min(P w∈x idf(w), P w∈y idf(w)) For efficiency, we perform beam search and ignore any chain if the score drops below a threshold (0.08). Finally we take the top 100 chains and use these sentences as passages in our model. We use Spacy5 for tokenization. For word embedding, we use the 840B 300-dimensional pretrained word vectors from GloVe and we do not update them during training. For simplicity, we do not use any character embedding. The number of hidden units in all LSTMs is 50 (H = 100). We use dropout (Srivastava et al., 2014) with probability 0.25 for every learnable layer. During training, the minibatch size is fixed at 8. We use the Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001 and clipnorm 5. We use cross entropy loss for training. This being a multiple-choice QA task, we use accuracy as the evaluation metric. 6.2 Main Results Table 1 compares our results on the WikiHop dataset with several recently proposed multi-hop 5https://spacy.io/api/tokenizer QA models. We show the best results from each of the competing entries. Welbl et al. (2018) presented the results of BiDAF (Seo et al., 2017) on the WikiHop dataset. Dhingra et al. (2018) incorporated coreference connections inside GRU network to capture coreference links while obtaining the contextual representation. Recently, Cao et al. (2018) and Song et al. (2018) proposed graph neural network approaches for multi-hop reading comprehension. While the high level idea is similar for these work, Cao et al. (2018) used ELMo (Peters et al., 2018) for a contextual embedding, which has proven to be very useful in the recent past in many NLP tasks. As seen in Table 1, our proposed model PathNet significantly outperforms prior approaches on WikiHop. Additionally, we benefit from interpretability: unlike these prior methods, our model allows identifying specific entity chains that led to the predicted answer. Table 2 presents results on the OpenBookQA dataset. We compare with the Knowledge Enhanced Reader (KER) model (Mihaylov et al., 2018). The variants reflect the source from which the model retrieves relevant knowledge: the open book (OB), WordNet subset of ConceptNet, and Open Mind Common Sense (OMCS) subset of ConceptNet, and the corpus of 590K sentences (Text). Since KER does not scale to a corpus of this size, we provided it with the combined set of sentences retrieved by our model for all the OpenBookQA questions. The model computes various cross-attentions between the question, knowledge, and answer choices, and combines these attentions to select the answer. Overall, our proposed approach marginally improved over the previous models on the OpenBookQA dataset6. Note that, our model was designed for the closed-domain setting where all the required knowledge is provided. 
Yet, our model is able to generalize on the open-domain setting where the retrieved knowledge may be noisy or insufficient to answer the question. 6.3 Effectiveness of Model Components Table 3 shows the impact of context-based and passage-based path encodings. Performance of the model degrades when we ablate either of 6Sun et al. (2018) used the large OpenAI fine-tuned language model (Radford et al., 2018) pre-trained on an additional dataset, RACE (Lai et al., 2017) to achieve a score of 55% on this task. 2744 the two path encoding modules. Intuitively, in context-based path encodings, limited and more fine-grained context is considered due to the use of specific entity locations. On the contrary, the passage-based path encoder computes the path representations considering the entire passage representations (both passages which contain the head entity and tail entity respectively). As a result, even if the intermediate entity can not be used meaningfully, the model poses the ability to form an implicit path representation. Passage-based path encoder is more helpful on OpenBookQA as it is often difficult to find meaningful explicit context-based paths through entity linking across passages. Let us consider the following example taken from OpenBookQA development set where our model successfully predicted the correct answer. Question: What happens when someone on top of a bicycle starts pushing it ’s peddles in a circular motion ? Answer: the bike accelerates Best Path: (bicycle, pedal, bike) p1: bicycles require continuous circular motion on pedals p2: pushing on the pedals of a bike cause that bike to move. In this case, the extracted path through entity linking is not meaningful as the path composition would connect bicycles to bike 7. However, when the entire passages are considered, they contain sufficient information to help infer the answer. Table 4 presents the results on WikiHop development set when different composition functions are used for Equation (4). Recurrent networks, such as LSTM and GRU, enable the path encoder to model an arbitrary number of hops. For 2-hop paths, we found that a simple feed forward network (FFL) performs slightly better than the rest. We also considered sharing the weights (FFL shared) when obtaining the relation vectors rhe,e1 and re1′,ck. Technically, the FFL model is performing the same task in both cases: extracting implicit relations and the parameters could be shared. However, practically, the unshared 7Entities in science questions can be phrases and events (e.g., “the bike accelerates”). Identifying and matching such entities are very challenging in case of the OpenBookQA dataset. We show that our entity-linking approach, designed for noun phrases and named entities, is still able to perform comparable to state-of-the-art methods on science question answering, despite this noisy entity matching. Model % Accuracy (∆) WikiHop OBQA PathNet 67.4† 55.0† - context-based path 64.7 (2.7) 54.8∗(0.2) - passage-based path 63.2 (4.2) 46.2 (8.8) Table 3: Ablation results on development sets. ∗Improvement over this is not statistically significant. Model Accuracy (%) WikiHop ∆ FFL (PathNet) 67.4 FFL Shared 66.7 0.7 LSTM 67.1 0.3 GRU 67.3 0.1 Table 4: Various composition functions to generate path representation (xctx) on WikiHop development set. 
weights perform better, possibly because it gives the model the freedom to handle answer candidates differently, especially allowing the model to consider the likelihood of a candidate being a valid answer to any question, akin to a prior. 6.4 Qualitative Analysis One key aspect of our model is its ability to indicate the paths that contribute most towards predicting an answer choice. Table 5 illustrates the two highest-scoring paths for two sample WikiHop questions which lead to correct answer prediction. In the first question, the top-2 paths are formed by connecting Zoo Lake to Gauteng through the intermediate entities Johannesburg and South Africa, respectively. In the second example, the science fiction novel This Day All Gods Die is connected to the publisher Bantam Books through the author Stephen R. Donaldson, and the collection Gap Cycle for first and second paths, respectively. We also analyzed 50 randomly chosen questions that are annotated as requiring multi-hop reasoning in the WikiHop development set and that our model answered correctly. In 78% of the questions, we found at least one meaningful path8 in the top-3 extracted paths, which dropped to 62% for top-1 path. On average, 66% of the top-3 paths returned by our model were meaningful. In contrast, only 46% of three randomly selected paths per question made sense, even when limited to the paths for the correct answers. That is, a random baseline, even with oracle knowledge of the correct answer, would only find a good path in 46% 8A path is considered meaningful if it has valid relations that can be composed to conclude the predicted answer. 2745 Question: (zoo lake, located in the administrative territorial entity, ?) Answer: gauteng Rank-1 Path: (zoo lake, Johannesburg, gauteng) Passage1: ... Zoo Lake is a popular lake and public park in Johannesburg , South Africa . It is part of the Hermann Eckstein Park and is ... Passage2: ... Johannesburg ( also known as Jozi , Joburg and eGoli ) is the largest city in South Africa and is one of the 50 largest urban areas in the world . It is the provincial capital of Gauteng , which is ... Rank-2 Path: (zoo lake, South Africa, gauteng) Passage1: ... Zoo Lake is a popular lake and public park in Johannesburg , South Africa . It is ... Passage2: ... aka The Reef , is a 56-kilometre - long north - facing scarp in the Gauteng Province of South Africa . It consists of a ... Question: (this day all gods die, publisher, ?) Answer: bantam books Rank-1 Path: (this day all gods die, Stephen R. Donaldson, bantam books) Passage1: ... All Gods Die , officially The Gap into Ruin : This Day All Gods Die , is a science fiction novel by Stephen R. Donaldson , being the final book of The Gap Cycle ... Passage2: ... The Gap Cycle ( published 19911996 by Bantam Books and reprinted by Gollancz in 2008 ) is a science fiction story , told in a series of 5 books , written by Stephen R. Donaldson . It is an ... Rank-2 Path: (this day all gods die, Gap Cycle, bantam books) Passage1: ... All Gods Die , officially The Gap into Ruin : This Day All Gods Die , is a science fiction novel by Stephen R. Donaldson , being the final book of The Gap Cycle ... Passage2: ... The Gap Cycle ( published 19911996 by Bantam Books and reprinted by Gollancz in 2008 ) is a science fiction story ... Table 5: Two top-scoring paths for sample WikiHop Dev questions. In the Rank-1 path for the first question, the model composes the implicit located in relations between (Zoo lake, Johannesburg) and (Johannesburg, Gauteng). 
of the cases. We also analyzed 50 questions that our model gets wrong. The top-scoring paths here were of lower quality (only 16.7% were meaningful). This provides qualitative evidence that our model’s performance is correlated with the quality of the paths it identifies, and it does not simply guess using auxiliary information such as entity types, number of paths,9 etc. 7 Conclusion We present a novel, path-based, multi-hop reading comprehension model that outperforms previous models on WikiHop and OpenBookQA. Importantly, we illustrate how our model can explain its reasoning via explicit paths extracted across multiple passages. While we focused on 2-hop reasoning required by our evaluation datasets, the approach can be generalized to longer chains and to longer natural language questions. Acknowledgment We thank Johannes Welbl for helping us evaluating our model on the WikiHop test set. We thank Rodney Kinney, Brandon Stilson and Tal Friedman for helping to produce the clean corpus used for OpenBookQA. We thank Dirk Groeneveld for 9A model that returns the answer with the highest number of paths would score only 18.5% on the WikiHop development set. retrieving the sentences from this corpus using the 2-hop retrieval. Computations on beaker.org were supported in part by credits from Google Cloud. References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of ACM SIGMOD international conference on Management of data. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. In NIPS. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. CoRR, abs/1808.09920. Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. In Proceedings of EACL. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of NAACL. 2746 Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gatedattention readers for text comprehension. In Proceedings of ACL. Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai Surdeanu, and Peter Clark. 2015. Higherorder lexical semantic models for non-factoid answer reranking. TACL, 3:197–210. Matt Gardner and Tom M. Mitchell. 2015. Efficient and expressive knowledge base completion using subgraph feature extraction. In Proceedings of EMNLP. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of EMNLP. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of IJCAI. Peter Jansen, Rebecca Sharp, Mihai Surdeanu, and Peter Clark. 2017. Framing qa as building and ranking intersentence answer justifications. Computational Linguistics, 43:407–449. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. 
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of ACL. Daniel Khashabi, Erfan Sadeqi Azer, Tushar Khot, Ashutosh Sabharwal, and Dan Roth. 2019. On the capabilities and limitations of reasoning for natural language understanding. CoRR, abs/1901.02522. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018a. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of NAACL. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semistructured knowledge. In Proceedings of IJCAI. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2018b. Question answering as global reasoning over semantic abstractions. In Proceedings of AAAI. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open information extraction. In Proceedings of ACL. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In EMNLP. Ni Lao, Tom M. Mitchell, and William W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of EMNLP. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, Technical report, OpenAI. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of KDD. Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. CoRR, abs/1809.02040. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. CoRR, abs/1810.13441. Kristina Toutanova, Victoria Lin, Wen tau Yih, Hoifung Poon, and Chris Quirk. 2016. Compositional learning of embeddings for relation paths in knowledge base and text. In Proceedings of ACL. 2747 Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. 
In Proceedings of the 2nd Workshop on Representation Learning for NLP. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL, 6:287–302. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. Edwin B. Wilson. 1927. Probable inference, the law of succession, and statistical inference. JASA, 22(158):209–212. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. In Proceedings of ICLR.
Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts Elizabeth Clark1∗ Asli Celikyilmaz2 Noah A. Smith1,3 1Paul G. Allen School of Computer Science & Engineering, University of Washington 2Microsoft Research 3Allen Institute for Artificial Intelligence {eaclark7,nasmith}@cs.washington.edu [email protected] Abstract For evaluating machine-generated texts, automatic methods hold the promise of avoiding collection of human judgments, which can be expensive and time-consuming. The most common automatic metrics, like BLEU and ROUGE, depend on exact word matching, an inflexible approach for measuring semantic similarity. We introduce methods based on sentence mover’s similarity; our automatic metrics evaluate text in a continuous space using word and sentence embeddings. We find that sentence-based metrics correlate with human judgments significantly better than ROUGE, both on machine-generated summaries (average length of 3.4 sentences) and human-authored essays (average length of 7.5). We also show that sentence mover’s similarity can be used as a reward when learning a generation model via reinforcement learning; we present both automatic and human evaluations of summaries learned in this way, finding that our approach outperforms ROUGE. 1 Introduction Automatic text evaluation reduces the need for human evaluations, which can be expensive and time-consuming to collect, particularly when evaluating long, multi-sentence texts. Automatic metrics allow faster measures of progress when training and testing models and easier development of text generation systems. However, existing automatic metrics for evaluating text are problematic. Due to their computational efficiency, metrics based on word-matching are common, such as ROUGE (Lin, 2004) for summarization, BLEU (Papineni et al., 2002) for machine translation, and METEOR (Banerjee and Lavie, 2005) or CIDER (Vedantam et al., 2015) for image captioning. Nevertheless, these metrics of∗Work done while author was at Microsoft Research. The children eat lunch and play in the park. The family is on a picnic. They have fun. A: B: 3.7 6.3 5.1 6.2 7.6 5.5 6.1 5.1 S+WMS: 5.13 Figure 1: An illustration of S+WMS (a sentence mover similarity metric that uses both word and sentence embeddings) between two documents. This metric finds the minimal cost of “moving” both the word embeddings (orange) and the sentence embeddings (blue) in Document A to those in Document B. An arrow’s width is the proportion of the embedding’s weight being moved, and its label is the Euclidean distance. Here we show only the highest weighted connections. ten fail to capture information that has been reworded or reordered from the reference text, as shown in Kilickaya et al. (2017) and Table 1.1 They have also been found to correlate weakly with human judgments (Liu et al., 2016; Novikova et al., 2017). To avoid these shortcomings, word mover’s distance (WMD; Kusner et al., 2015) can be used to evaluate text in a continuous space using pretrained word embeddings instead of relying on exact word matching. WMD has been used successfully for tasks including image caption evaluation (Kilickaya et al., 2017), automatic essay evaluation (Tashu and Horv´ath, 2018), and affect detection (Alshahrani et al., 2017). This bag-ofembeddings approach is flexible but fails to reflect the grouping of words and ideas, a shortcoming that becomes more problematic as the length of the document grows. 
We modify WMD for evaluating multi-sentence texts by basing the score on sentence embeddings (§3), giving it access to higher-level representa1For readability, we scale ROUGE scores by a factor of 100 and sentence mover’s metrics by a factor of 1000. Reference passage. the only thing crazier than a guy in snowbound massachusetts boxing up the powdery white stuff and offering it for sale online ? people are actually buying it . for $ 89 , self-styled entrepreneur kyle waring will ship you 6 pounds of boston-area snow in an insulated styrofoam box – enough for 10 to 15 snowballs , he says . Summary ROUGE-L WMS SMS S+WMS Human summary. a man in suburban boston is selling snow online to customers in warmer states . for $ 89 , he will ship 6 pounds of snow in an insulated styrofoam box . 39.30 57.85 99.98 24.06 Word order. in suburban boston , a man is selling snow online to customers in warmer states . he will ship 6 pounds of snow in an insulated styrofoam box for $ 89 . 31.44 57.85 99.98 24.06 (↓20%) (=) (=) (=) Repetition. a man in suburban boston is selling snow is selling snow online to customers in warmer states in warmer states . for $ 89 , he will ship he will ship 6 pounds 6 pounds of snow in an insulated styrofoam box in a styrofoam box . 35.07 57.31 89.40 22.81 (↓11%) (↓1%) (↓11%) (↓5%) Table 1: A comparison of scores for three different summaries for a reference passage (the first lines of a news article). The human summary has been permuted with its clauses rearranged (Word order) and repeated (Repetition). Word order changes negatively affect ROUGE-L more than repetition; the other metrics are unaffected by word order choices but, to varying degrees, penalize repetition. tions of the text. We introduce two new metrics: sentence mover’s similarity (SMS), which relies only on sentence embeddings, and sentence and word mover’s similarity (S+WMS), which uses word and sentence embeddings, as in Figure 1. In §4, we find that sentence mover’s similarity metrics significantly improve correlation with human evaluations over ROUGE-L (the longest common subsequence variant of ROUGE) and WMD when scoring automatically generated summaries (averaging 3.4 sentences). We also automatically evaluate human-authored essays (averaging 7.5 sentences) and find smaller but significant gains. We compute sentence mover’s similarity metrics with type-based embeddings and contextual embeddings and find these results hold regardless of embedding type, with no significant difference caused by the choice of embedding. Finally, we show in §5 that sentence mover’s similarity metrics can also be used when learning to generate text. Generating summaries using reinforcement learning with sentence mover’s similarity as the reward results in higher quality summaries than those generated using a ROUGE-L or WMD reward, according to both automatic metrics and human evaluations. 2 Background: Word Mover’s Distance Earth mover’s distance (EMD, also known as the Wasserstein metric; Rubner and Guibas, 1998) is a measure of the distance between two probability distributions. Word mover’s distance (WMD; Kusner et al., 2015) is a discrete version of EMD that evaluates the distance between two sequences (e.g., sentences, paragraphs, etc.), each represented with relative word frequencies. It combines (1) item similarity2 on bag-of-word (BOW) histogram representations of text (Goldberg et al., 2018) with (2) word embedding similarity. 
For any two documents A and B, WMD is defined as the minimum cost of transforming one document into the other. Each document is represented by the relative frequencies of words it contains, i.e., for the ith word type, dA,i = count(i)/|A| (1) where |A| is the total word count of document A, and dB,i is defined similarly. Now let the ith word be represented by vi ∈ Rm, i.e., an m-length embedding,3 allowing us to define distances between the ith and jth words, denoted ∆(i, j). V is the vocabulary size. We follow Kusner et al. (2015) and use the Euclidean distance ∆(i, j) = ∥vi −vj∥2. The WMD is then the solution to the linear program: WMD(A, B) = min T≥0 PV i=1 PV j=1Ti,j∆(i, j) (2a) s.t. ∀i, PV j=1Ti,j = dA,i, (2b) 2The similarity can be defined as cosine, Jaccard, Euclidean, etc. 3Our evaluation scores depend on pretrained word embeddings, which can be type-based or contextual. Our experiments consider both; see §4 and §5. When using contextual embeddings, we treat each token as its own type, as each word will have a different embedding depending on its context. ∀j, PV i=1Ti,j = dB,j (2c) T ∈RV ×V is a nonnegative matrix, where each Ti,j denotes how much of word i (across all its tokens) in A is assigned to tokens of word j in B, and the constraints ensure the flow of a given word cannot exceed its weight. Specifically, WMD ensures that the entire outgoing flow from word i equals dA,i, i.e., P j Ti,j = dA,i. Additionally, the amount of incoming flow to word j must match dB,j, i.e., P i Ti,j = dB,j. Following the example of Kilickaya et al. (2017), we transform WMD into a similarity (WMS): WMS(A, B) = exp(−WMD(A, B)) (3) WMS measures two documents’ similarity by minimizing the total distance to move words between two documents, combining the strengths of BOW and word embedding-based similarity metrics. In Figure 1, WMS would calculate the cost of moving from Document A to Document B using only the word embeddings, denoted in orange. WMS is symmetric, and WMS(A, A) = 1 when word embeddings are deterministic. Empirically, WMD has improved the performance of NLP tasks (see §6), specifically sentence-level tasks, such as image caption generation (Kilickaya et al., 2017) and natural language inference (Sulea, 2017). However, its cost grows prohibitively as the length of the documents increases, and the BOW approach can be problematic when documents become large as the relation between sentences is lost. By only measuring word distances, the metric cannot capture information conveyed by the grouping of words, for which we need higher-level document representations (Dai et al., 2015; Wu et al., 2018). 3 Sentence Mover’s Similarity Metrics We modify WMS to measure the similarity between two documents using sentence embeddings, which we call a sentence mover’s similarity approach. We introduce two new metrics: Sentence Mover’s Similarity (SMS) and Sentence and Word Mover’s Similarity (S+WMS). SMS replaces the word embeddings in WMS with sentence embeddings (§3.1), while S+WMS combines the two metrics and uses both word and sentence embeddings (§3.2). Our code (an extension of an existing WMD implementation4) and datasets are publicly available.5 3.1 Sentence Mover’s Similarity Sentence Mover’s Similarity (SMS) performs the same linear optimization problem in Eq. 2a as WMS, except now each document is represented as a bag of sentence embeddings rather than a bag of word embeddings. In Figure 1, SMS considers only the sentence embeddings, denoted in blue. 
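For concreteness, the optimization of Eqs. 2a–2c, which is shared by WMS, SMS, and S+WMS and is instantiated here at the word level, can be sketched with a generic LP solver. The paper's own implementation extends wmd-relax; the sketch below is only illustrative, assumes a type-based embedding lookup embed(token), and uses hypothetical helper names.

```python
import numpy as np
from scipy.optimize import linprog


def nbow(tokens, vocab):
    """Relative-frequency weights d_i = count(i) / |A| (Eq. 1) over a shared vocabulary."""
    d = np.zeros(len(vocab))
    for t in tokens:
        d[vocab[t]] += 1.0
    return d / len(tokens)


def wms(tokens_a, tokens_b, embed):
    """exp(-WMD) between two token sequences; `embed` maps a token to a vector."""
    vocab = {t: i for i, t in enumerate(sorted(set(tokens_a) | set(tokens_b)))}
    d_a, d_b = nbow(tokens_a, vocab), nbow(tokens_b, vocab)
    vecs = np.stack([embed(t) for t in vocab])                      # V x m embedding matrix
    delta = np.linalg.norm(vecs[:, None] - vecs[None, :], axis=-1)  # Euclidean distances (Eq. 2a costs)

    v = len(vocab)
    # T is flattened row-major: rows of T must sum to d_A (Eq. 2b), columns to d_B (Eq. 2c).
    row_sums = np.kron(np.eye(v), np.ones(v))
    col_sums = np.kron(np.ones(v), np.eye(v))
    res = linprog(c=delta.ravel(),
                  A_eq=np.vstack([row_sums, col_sums]),
                  b_eq=np.concatenate([d_a, d_b]),
                  bounds=(0, None), method="highs")
    return float(np.exp(-res.fun))                                  # Eq. 3
```

Specialized EMD solvers are much faster than this dense LP formulation, which is why the released code builds on wmd-relax; the sketch is meant only to make the constraints of Eqs. 2b–2c explicit.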
To get the representation of a sentence in a document, we combine the sentence’s word embeddings. Sentence representations based on averaging or pooling word embeddings perform competitively on tasks including sentence classification, recognizing textual entailment, and paraphrase detection (Conneau and Kiela, 2018). We use sentence representations that are the average of their word embeddings, as this approach outperformed pooling methods in preliminary results. While in WMS word embeddings are weighted according to their frequency in the document (see Eq. 1), SMS weights each sentence embedding by the number of words (|A|) it contains.6 So a sentence i in document A will receive a weight of: dA,i = |i|/|A| (4) We solve the same linear program, Eq. 1, by calculating the cumulative distance of moving a document’s sentences to match another document. Now the vocabulary is the set of sentences in the documents instead of the words, as in Figure 2. 3.2 Sentence and Word Mover’s Similarity Sentence and Word Mover’s Similarity (S+WMS) combines WMS and SMS and represents each document as a collection of both words and sentences. Each document is now a bag of both word and sentence embeddings (as seen in Figure 1), where each word embedding is weighted according to its frequency and each sentence embedding is weighted according to its length. Now the bag of words and sentences representing document A is normalized by 2|A|, so that: 4https://github.com/src-d/wmd-relax 5https://github.com/eaclark07/sms 6Preliminary results showed count-based sentence weightings performed better than uniform weightings. Other weighting options, such as frequency-based weighting as done in BERTScore (Zhang et al., 2019), are a direction for extending this work. Words Sentences Figure 2: The S+WMS T matrix for documents A and B from Figure 1 (with empty rows/columns removed). Contrarily, WMS’s T matrix only maps between words and has the dimensions of the dashed region labeled “Words,” and SMS’s maps between sentences in the shape of the dashed region “Sentences.” Best viewed in color. dA,i = ( count(i)/2|A|, if i is a word |i|/2|A|, if i is a sentence (5) As in WMS and SMS, the same linear program in Eq. 1 is solved, this time calculating the cumulative distance of moving both a document’s words and sentences to match another document. The vocabulary is the set of sentences and words in the documents (see Figure 2). The sentence embeddings are treated the same as word embeddings in the optimization; the only difference is their length-based weights. This means a sentence embedding can be mapped to a word embedding (e.g., “They have fun.” maps to “play” in Figure 1) or vice versa. It also means that a sentence’s words do not have to move to the same word or sentence embedding(s) that their sentence moves to (as seen in Figure 1); a sentence in document A could be transported to an embedding in document B and have none of its words moved to the same embedding. More constraints could be introduced to further control the flow between documents, which we leave to future work. 4 Intrinsic Evaluation To test the performance of the SMS and S+WMS metrics, we first examine their usefulness as evaluation metrics. (In §5, we evaluate their performance as cost functions for an extrinsic task, abstractive summarization.) We measure the correlations between the scores assigned to texts by various automatic metrics (ROUGE-L, WMS, SMS, S+WMS) and the scores assigned by human judges. 
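Before describing the evaluation datasets, the bags compared by SMS and S+WMS (Eqs. 4 and 5) can be sketched as follows, reusing the same solver and embedding lookup as in the word-level sketch above; again, the helper names are hypothetical, and sentence embeddings are simple averages of word embeddings, as described in Section 3.1.

```python
import numpy as np


def sms_bag(sentences, embed):
    """SMS: one averaged embedding per sentence, weighted by its token count (Eq. 4)."""
    total = sum(len(s) for s in sentences)
    embs = [np.mean([embed(t) for t in s], axis=0) for s in sentences]
    weights = [len(s) / total for s in sentences]
    return np.stack(embs), np.array(weights)


def s_wms_bag(sentences, embed):
    """S+WMS: words and sentences together, each normalized by 2|A| (Eq. 5)."""
    total = sum(len(s) for s in sentences)
    embs, weights, counts = [], [], {}
    for s in sentences:
        embs.append(np.mean([embed(t) for t in s], axis=0))  # sentence embedding
        weights.append(len(s) / (2 * total))                 # weighted by length
        for t in s:
            counts[t] = counts.get(t, 0) + 1
    for t, c in counts.items():                              # word embeddings
        embs.append(embed(t))
        weights.append(c / (2 * total))                      # weighted by frequency
    return np.stack(embs), np.array(weights)
```

Given the bags for two documents, the transport problem is solved exactly as in the word-level sketch, with the union of the two bags playing the role of the vocabulary and Euclidean distances between the (word or sentence) embeddings as the costs.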
We are interested in multi-sentence texts, both machine- and humangenerated. Therefore, we consider subsets of two corpora that have been judged by humans: a collection of automatically generated summaries of articles in the CNN/Daily Mail news dataset (alongside reference summaries; see Section 4.1; Chaganty et al., 2018; Hermann et al., 2015; Nallapati et al., 2016) and student essays from the Hewlett Foundation’s Automated Student Assessment Prize (Section 4.2).7 Statistics describing the datasets are in A.1. Because the word and sentence mover’s similarity metrics are based on pretrained representations, we explore the effect of varying the word embedding method. We present results for two different types of word embeddings: GloVe embeddings (Pennington et al., 2014) and ELMo embeddings8 (Peters et al., 2018; Gardner et al., 2018). We obtain GloVe embeddings, which are type-based, 300-dimensional embeddings trained on Common Crawl,9 using spaCy,10 while the ELMo embeddings are character-based, 1,024-dimensional, contextual embeddings trained on the 1B Word Benchmark (Chelba et al., 2013). We use ELMo to embed each sentence, which produces three vectors for each word, one from each layer of the model. We average the vectors to get a single embedding for each word in the sentence. All correlations are Spearman correlations (Elliott and Keller, 2014; Kilickaya et al., 2017), and significance in the improvement between two metrics’ correlations with human judgment is calculated using the Williams (1959) significance test.11 4.1 Summaries Dataset Evaluation To understand how the sentence mover’s similarity metrics evaluate automatically generated text, we use the subset of the CNN/Daily Mail dataset for which Chaganty et al. (2018) collected human annotations. Annotators evaluated summaries (generated with four different neural models) on a scale 7https://www.kaggle.com/c/asap-eas 8https://allennlp.org/elmo 9http://commoncrawl.org/the-data/ 10https://spacy.io/models/en#en_core_ web_md 11https://github.com/ygraham/ nlp-williams Summaries Essays ROUGE-L 0.117 0.441 GloVe ELMo GloVe ELMo WMS **0.180 **0.160 0.429 0.443 SMS **0.258 **0.253 0.457 0.451 S+WMS **0.214 **0.204 *0.488 *0.490 Table 2: Spearman correlation of metrics with human evaluations. Asterisks indicate significant improvement over ROUGE-L, with (*) for p < 0.05 and (**) for p < 0.01. from –1 to 1. We consider the subset of summaries scored by two or more judges, taking the average to be the summary’s score. The automatic evaluation metrics score each generated summary’s similarity to the human-authored reference summary from the CNN/Daily Mail dataset. Table 2 shows each metric’s correlation with the human judgments. SMS correlates best with human judgments, and both sentence-based metrics outperform ROUGE-L and WMS. We find that the difference between GloVe and ELMo’s scores is not significant.12 Discussion Two examples of generated summaries and their scores are shown in Table 3. Because the scores cannot be directly compared between metrics, we distinguish scores that are in the top quartile for their metric (i.e., the highest rated) and in the bottom quartile (i.e., the lowest rated). The first example in Table 3 is highly rated by metrics using word and sentence embeddings, but judged to be a poor summary by ROUGE-L because information is reworded and reordered from the reference. For example, the phrase “asked for medical help” is worded as “sought medical attention” in the hypothesis summary. 
Nevertheless, exact word matching can be important for ensuring factual correctness. While the generated hypothesis summary states “six officers have been suspended with pay”, the reference states they were actually “suspended without pay.” The second example, which was generated with a seq2seq model, was one of the best summaries according to ROUGE-L but one of the worst according to SMS and S+WMS. It also received low human judgments, most likely due to its nonsensical repetitions. While the short, repeated phrases like “three different flavours” match the reference summary well enough to score well with ROUGE12Williams test: p = 0.35 (SMS) and p = 0.16 (S+WMS) L, the overall sentence representations are distant from those in the reference summary, resulting in low SMS and S+WMS scores. 4.2 Essays Dataset Evaluation To test the metrics on human-authored text, we use a dataset of graded student essays that consists of responses to standardized test questions for tenth graders. We use a subset of Question #3 from the exam, which asks the test-taker to synthesize information from a reading passage, where student responses contain 5–15 sentences. Graders assigned the student-authored responses with scores ranging from 0 to 3. For the reference essay, we use a top-scoring sample essay, which the graders had access to as a reference while assigning scores. The full reference essay is in A.2. Table 2 shows the correlation of each metric with the evaluators’ scores. As in the summarization task, SMS outperforms both ROUGE-L and WMS. However, in this case, having the sentence representations in the metric gives the best result, with S+WMS correlating best with human scores, significantly better than ROUGE-L. This is consistent across embedding type; once again, the choice of embedding does not create a significant difference between the sentence mover’s metrics.13 Discussion Aside from the length of the text, the Essays dataset presents the metrics with several challenges not found in the Summaries dataset. For example, the dataset contains a large number of spelling mistakes, due to both author misspellings and errors in the transcription process. One essay begins, “The setting of the story had effected the cycle’s becuse if it was sub earbs he could have stoped any where and got water ...” The tone and style of the essay can also vary from the reference essay. (For example, the author of Sample #3 in A.2 ends their essay by reflecting on how they would respond in the protagonist’s place.) Embedding-based metrics may be more forgiving to deviations in writing style from the reference essay, such as the use of first person. While Table 2 indicates sentence mover’s similarity metrics significantly improve correlation with human judgments over standard methods, there is still enough disagreement that we believe automatic metrics should not replace human evaluations. Rather, they should complement human evaluations as an automatic proxy that can be used 13Williams test: p = 0.33 (SMS) and p = 0.46 (S+WMS) Samples Summaries Metric Score Sample #1 Reference. Freddie Gray, who is black, asked for medical help but was denied during 00-minute police car ride, eventually paramedics were called. Deputy police commissioner Kevin Davis conceded their failure. But chief commissioner refuses to resign over the death. Six officers are suspended without pay during an investigation. Human ROUGE-L 0.00 12.44 Hypothesis. 
Baltimore Police Commissioner Anthony Batts ruled out his resignation despite that fact that his deputy admitted they should have sought medical attention for Freddie Gray. Six officers have been suspended with pay as local police and federal authorities investigate. Commissioner Anthony Batts has ruled out the possibility of his resignation. WMS SMS S+WMS 21.41 128.91 47.89 Sample #2 Reference. Choc on Choc’s chocolates come in three different flavours. The face of each politician is emblazoned on milk Belgium chocolate bars. Cameron’s has blueberries, Clegg is honeycomb and Miliband is raspberry. Human ROUGE-L -0.5 34.57 Hypothesis. UNK lollies on 273 invalid chocolates come in three different flavours. Contains three different flavours - the colours associated with each leader. David Cameron, Nick Clegg, Nick Clegg, Nick Clegg and David Cameron. WMS SMS S+WMS 5.08 51.39 12.25 Table 3: Two examples from the Summaries dataset along with the scores they received (using GloVe) comparing reference (human summary) to hypothesis (model generated summary). Scores that are in the top quartile for a given metric are in green and bold. Scores in the bottom quartile are in red and italics. Human scores range from –1 to 1. Please see A.2 for details. human rouge wms sms s+wms human rouge wms sms s+wms 0.12 0.18 0.7 0.26 0.52 0.73 0.21 0.68 0.98 0.85 0.0 0.2 0.4 0.6 0.8 1.0 (a) Summaries with GloVe embeddings human rouge wms sms s+wms human rouge wms sms s+wms 0.12 0.16 0.77 0.25 0.6 0.79 0.2 0.74 0.97 0.91 0.0 0.2 0.4 0.6 0.8 1.0 (b) Summaries with ELMo embeddings human rouge wms sms s+wms human rouge wms sms s+wms 0.44 0.43 0.5 0.46 0.47 0.61 0.49 0.54 0.94 0.84 0.0 0.2 0.4 0.6 0.8 1.0 (c) Essays with GloVe embeddings human rouge wms sms s+wms human rouge wms sms s+wms 0.44 0.44 0.59 0.45 0.5 0.66 0.49 0.6 0.91 0.91 0.0 0.2 0.4 0.6 0.8 1.0 (d) Essays with ELMo embeddings Figure 3: Spearman correlation with each metric and human evaluations using GloVe and ELMo embeddings on the Summaries and Essays datasets. (Best viewed in color.) for intermediate evaluation and as a reward signal when learning, as we show in §5. 5 Extrinsic Evaluation In addition to automatically evaluating text, we can also use sentence mover’s metrics as rewards while learning text generation models. To demonstrate this, we train an encoder-decoder model on the CNN/Daily Mail dataset to generate summaries using reinforcement learning (RL). Instead of maximizing likelihood, policy gradient RL methods can directly optimize discrete target evaluation metrics that are non-differentiable, such as ROUGE (Paulus et al., 2018; Jaques et al., 2017; Pasunuru and Bansal, 2017; Wu et al., 2016; Celikyilmaz et al., 2018; Edunov et al., 2018). Here, we learn policies to maximize WMS/SMS/S+WMS metrics, guiding the model to learn semantic similarities, while policies trained using ROUGE rely only on word n-gram matches between generated and ground-truth text. Model We encode the input document using 2layered bidirectional LSTM networks and a 2layered LSTM network for the decoder. We use the attention mechanism (Bahdanau et al., 2015; See et al., 2017) to force the decoder model to learn to focus (i.e., attend) on specific parts of the input sequence when decoding, instead of relying only on the hidden vector of the decoder’s LSTM. We also include pointer networks (See et al., 2017; Cheng and Lapata, 2016), which point to elements of the input sequence at each decoding step. 
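The summarization model just described relies on an attention mechanism over the encoder states. Below is a minimal PyTorch sketch of Bahdanau-style additive attention only; it omits the pointer/copy mechanism, and all dimensions and class names are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style (additive) attention, a simplified sketch.

    Scores each encoder state against the current decoder state, then
    returns the attention distribution and the resulting context vector.
    """
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_states) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                          # (batch, src_len)
        weights = F.softmax(scores, dim=-1)     # attention distribution
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights

# Toy usage with random tensors (dimensions are illustrative only).
attn = AdditiveAttention(enc_dim=256, dec_dim=128, attn_dim=64)
ctx, w = attn(torch.randn(2, 7, 256), torch.randn(2, 128))
print(ctx.shape, w.shape)  # torch.Size([2, 256]) torch.Size([2, 7])
```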
To train our policy-based generator, we use a mixed training objective that jointly optimizes multiple losses, which we describe below.

MLE Our baseline model uses maximum likelihood training for sequence generation. Given $y^* = \{y^*_1, y^*_2, \ldots, y^*_T\}$ as the ground-truth summary for a given input document $d$, we compute the loss as:

$L_{\text{MLE}} = -\sum_{t=1}^{T} \log p(y^*_t \mid y^*_1 \ldots y^*_{t-1}, d)$   (6)

by taking the negative log-likelihood of the target word sequence.

Model | Loss w/ Reward Metric | ROUGE-1 | ROUGE-2 | ROUGE-L | WMS | SMS | S+WMS
MLE+Pgen [1] | (no reward) | 36.44 | 15.66 | 33.42 | – | – | –
MLE+Pgen+RL [2] | Mixed w/ ROUGE-L | 38.01 | 16.43 | 35.49 | – | – | –
MLE+Pgen+RL+Intra-Attn [3] | Mixed w/ ROUGE-L | 39.87 | 15.82 | 36.90 | – | – | –
MLE+Pgen (re-trained baseline) | (no reward) | 36.95 | 15.56 | 34.00 | 13.02 | 90.05 | 32.15
MLE+Pgen+RL | Mixed w/ ROUGE-L | 37.46 | 16.10 | 34.39 | 13.07 | 86.48 | 31.87
MLE+Pgen+RL | Mixed w/ WMS | 38.17 | 16.52 | 34.97 | 14.52 | 95.68 | 34.77
MLE+Pgen+RL | Mixed w/ SMS | 38.52 | 16.52 | 35.33 | 15.15 | 96.65 | 35.50
MLE+Pgen+RL | Mixed w/ S+WMS | 37.20 | 15.67 | 34.15 | 13.32 | 91.09 | 32.64

Table 4: Evaluation on the summarization task when various metrics are used as rewards during learning. Columns show the average score of each model's generated summaries according to various metrics. Previously reported results (upper block): [1] MLE training with pointer networks (Pgen) (See et al., 2017); [2] Mixed MLE and RL training with Pgen (Celikyilmaz et al., 2018); [3] Mixed MLE and RL training with Pgen and intra-decoder attention (Paulus et al., 2018). The lower block reports re-trained baselines and our models with new metrics. Bold indicates best among the lower block.

Reinforcement Learning (RL) Loss The decoder generates the summary sequence $\hat{y}$, which is then compared against the ground-truth sequence $y^*$ to compute the reward $r(\hat{y})$. Our model learns using a self-critical training approach (Rennie et al., 2016), by exploring new sequences and comparing them against the best greedily decoded sequence. For each training example $d$, we generate two output sequences: $\hat{y}$, which is sampled from the probability distribution at each time step, $p(\hat{y}_t \mid \hat{y}_1 \ldots \hat{y}_{t-1}, d)$, and $\tilde{y}$, the baseline output, which is greedily generated by argmax decoding from $p(\tilde{y}_t \mid \tilde{y}_1 \ldots \tilde{y}_{t-1}, d)$. The RL training objective is then to minimize:

$L_{\text{RL}} = (r(\tilde{y}) - r(\hat{y})) \sum_{t=1}^{T} \log p(\hat{y}_t \mid \hat{y}_1 \ldots \hat{y}_{t-1}, d)$   (7)

It ensures that, with better exploration, the model learns to generate sequences $\hat{y}$ that receive higher rewards than the baseline $\tilde{y}$, increasing the overall reward expectation of the model.

Mixed Loss While training with only the MLE loss yields a better language model, it may not guarantee better results on discrete performance measures such as WMS and SMS. Similarly, optimizing with only the RL loss using SMS as a reward may increase the reward gathered at the expense of diminished readability and fluency of the generated summary. A combination of the two objectives can yield improved task-specific scores while maintaining a good language model:

$L_{\text{MIXED}} = \gamma L_{\text{RL}} + (1 - \gamma) L_{\text{MLE}}$   (8)

where $\gamma$ is a hyperparameter balancing the two objective functions. We pre-train models with the MLE loss, and then continue with the mixed loss. We train four different models on the CNN/Daily Mail dataset using the mixed loss (MLE+RL) with ROUGE-L, WMS, SMS, and S+WMS as the reward functions. Training details are in A.3 and A.4.

5.1 Generated Summary Evaluation

We evaluate the generated summaries from each model with ROUGE-L, WMS, SMS, and S+WMS in Table 4.
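As a rough illustration of the objectives in Eq. (6)–(8), the sketch below computes the MLE, self-critical RL, and mixed losses from precomputed token log-probabilities and reward values; the tensors and rewards are toy stand-ins, not outputs of the actual summarizer.

```python
import torch

def mle_loss(log_probs_gold):
    """Eq. (6): negative log-likelihood of the ground-truth summary.

    `log_probs_gold` holds log p(y*_t | y*_<t, d) for each target token.
    """
    return -log_probs_gold.sum()

def self_critical_rl_loss(log_probs_sampled, reward_sampled, reward_greedy):
    """Eq. (7): self-critical loss; the greedily decoded sequence is the baseline.

    `log_probs_sampled` holds log p(y^_t | y^_<t, d) for the sampled sequence;
    the rewards are metric scores (e.g. SMS) of the sampled and greedy outputs.
    """
    return (reward_greedy - reward_sampled) * log_probs_sampled.sum()

def mixed_loss(l_rl, l_mle, gamma=0.97):
    """Eq. (8): convex combination of the RL and MLE objectives.

    The paper starts with gamma = 0.97 and gradually increases it (see A.3).
    """
    return gamma * l_rl + (1.0 - gamma) * l_mle

# Toy usage with made-up log-probabilities and rewards.
lp_gold = torch.log(torch.tensor([0.4, 0.3, 0.5]))
lp_sample = torch.log(torch.tensor([0.2, 0.6, 0.3]))
loss = mixed_loss(self_critical_rl_loss(lp_sample, reward_sampled=0.35,
                                        reward_greedy=0.28),
                  mle_loss(lp_gold))
print(loss.item())
```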
While we include previously reported numbers, we re-trained the mixed loss models using ROUGE-L and use those as our baseline, as previously trained models should be heavily optimized and use more complex networks than ours. For fair comparison, we kept the encoder-decoder network type, structure, hyperparameters, and initialization the same for each model, changing only the reward. We pre-trained an MLE model (“MLE+Pgen (no reward) (re-trained baseline)” in Table 4) and used it to initialize the mixed loss models with different reward functions. Across all metrics, the models trained using WMS and SMS metrics as the reward outperform models trained with ROUGE-L as the reward function. S+WMS models lag behind ROUGE-L. The SMS model outperforms all other models across all metrics on the abstractive summarization task, consistent with SMS’s performance at evaluating summaries in §4.1. Table 5 shows summaries generated from each of the mixed loss models. 5.2 Human Evaluation We collected human evaluations for 100 summaries generated by the mixed loss models to compare ROUGE-L as a reward to WMS, SMS, and S+WMS. Amazon Mechanical Turkers chose between two generated summaries, one from the ROUGE-L model and one from WMS, SMS, or Human Summary the 69 - year - old collaborated with nbc ’s today show to launch a contest for an elvis - obsessed couple to win the ‘ ultimate wedding ’ . the winning duo will get married in the brand new elvis presley ’s graceland wedding chapel at the westgate hotel on thursday , april 23 . while she agreed to make an appearance , the woman who wed elvis in 1967 made one thing clear before unveiling the latest wedding chapel to bear his name : no impersonators . Model Generated Summary ROUGE-L priscilla presley will serve as a witness at the first wedding to be held at an all - new chapel of love in las vegas . the 69 - year - old collaborated with nbc ’s today show to launch a contest for one elvis obsessed couple to win the ‘ ultimate wedding ’ . elvis performed more than 830 sold - out shows . WMS the 69 - year - old collaborated with nbc ’s today show to launch a contest for one elvis - obsessed couple to win the ‘ ultimate wedding ’ . the winning duo – announced next monday – will tie the knot at elvis presley ’s graceland wedding chapel inside the westgate hotel on thursday , april 23 . SMS priscilla presley will tie the knot at elvis presley ’s graceland wedding chapel inside the westgate hotel on thursday , april 23 . the 69 - year - old collaborated with nbc ’s today show to launch a contest for one elvis - obsessed couple to win the ‘ ultimate wedding ’ . S+WMS priscilla presley will serve as a witness at the first wedding to be held at an all - new chapel of love in las vegas . the 69 - year - old collaborated with nbc ’s today show to launch a contest for one elvis obsessed couple to win the ‘ ultimate wedding ’ . Table 5: Summaries generated from the mixed MLE+RL loss models with ROUGE-L, WMS, S+WMS, and SMS metrics as rewards, along with the corresponding human-authored reference summary. ROUGE-L vs. WMS ROUGE-L vs. SMS ROUGE-L vs. S+WMS Criteria ROUGE-L WMS = % ↑ ROUGE-L SMS = % ↑ ROUGE-L S+WMS = % ↑ non-redundancy 76 122 102 61% 64 144 92 69% 66 132 102 66% coherence 102 158 40 60% 83 170 47 67% 83 166 51 66% focus 99 161 40 61% 79 174 47 68% 84 166 50 66% overall 108 160 32 59% 85 179 36 67% 84 179 37 68% Table 6: Human evaluations on a random subset of 100 summaries. 
The frequencies from the head-to-head comparison of models trained with ROUGE-L against WMS/SMS/S+WMS are shown. Each summary is evaluated by 3 judges (300 summaries per criteria). ‘=’ indicates no difference. All improvements are statistically significance at p < 0.001. S+WMS. They selected one of the two summaries based on: (1) non-redundancy, fewer repeated ideas, (2) coherence, clearly expressed ideas, (3) focus, ideas free of superfluous details, and (4) overall, the summary effectively communicates the article’s content. These criteria help evaluate the impact of the metrics used as reward. (Task details are in A.5.) Results We asked human judges to evaluate the output of the mixed loss model trained with a ROUGE-L reward versus models trained with WMS, SMS, and S+WMS the reward. The results are shown in Table 6. Human judges significantly prefer summaries produced by models optimized with WMS, SMS, and S+WMS over ROUGE-L. SMS and S+WMS were preferred over ROUGE-L more often than WMS was. There is no significant difference between the evaluations of SMS and S+WMS. Among all other metrics, SMS was rated the highest on the non-redundancy question (69% improvement over the ROUGE-L score), indicating that the model learns to generate summaries that contain less repetition between sentences. While the SMS model’s output was highlyscored by both the automatic and human evaluations, removing word-level scoring does come with a cost, as seen in the example in Table 5. The SMS summary contains a mistake, stating that “priscilla will tie the knot” instead of “serve as a witness”. This issue may be mitigated by a better encoder for the summarization task and better sentence and word representations. As future work, we will investigate summarization models with more complex sentence embeddings and encoder structures (e.g., self-attention models). 6 Related Work Evaluation has been among the most discussed topics of the natural language generation (NLG) research area (Lapata and Barzilay, 2005; Belz and Reiter, 2006; Reiter and Belz, 2006; Barzilay and Lapata, 2008; Reiter and Belz, 2009; Reiter, 2011; Novikova et al., 2017). There are three main ways to evaluate NLG methods: (1) automatic metrics to compare NLG texts against reference texts, (2) task-based (extrinsic) evaluation to measure the impact of a NLG system on a downstream task, and (3) human evaluations, which ask people to rate generated texts. In this work we introduce new automatic evaluation metrics for long text generation and evaluation. Automatic evaluation metrics compare generated text against reference texts using word overlap metrics such as: BLEU (Papineni et al., 2002); ROUGE (Lin, 2004); NIST (Doddington, 2002), a version of BLEU; METEOR (Lavie and Agarwal, 2007), unigram precision and recall; CIDER (Vedantam et al., 2015), the average n-gram cosine similarity; cosine similarity between the average word embedding; and WMD, which calculates the word embedding-based “travel cost”. Though all have strengths and weaknesses, ROUGE metrics (particularly ROUGE-L) are common for multisentence text evaluations. Textual metrics that consider specific qualities in the system outputs, like complexity and diversity, are also used to evaluate NLG systems (Dusek et al., 2019; Hashimoto et al., 2019; Sagarkar et al., 2018; Purdy et al., 2018). 
Word mover’s distance has recently been used for NLP tasks like learning word embeddings (Zhang et al., 2017; Wu et al., 2018), textual entailment (Sulea, 2017), document similarity and classification (Kusner et al., 2015; Huang et al., 2016; Atasu et al., 2017), image captioning (Kilickaya et al., 2017), document retrieval (Balikas et al., 2018), clustering for semantic word-rank (Zhang and Wang, 2018), and as additional loss for text generation that measures the optimal transport between the generated hypothesis and reference text (Chen et al., 2019). We investigate WMD for multi-sentence text evaluation and generation and introduce sentence embedding-based metrics. 7 Conclusion We present SMS and S+WMS, sentence mover’s similarity metrics for automatically evaluating multi-sentence texts. We find including sentence embeddings in automatic metrics significantly improves scores’ correlation with human judgments, both on automatically generated and human-authored texts. The metrics’ gain over ROUGE-L is consistent across word embedding types; there is no significant difference between type-based and contextual embeddings. Moreover, we find these metrics can be used to generate text; summaries generated with SMS as a reward are of better quality than ones generated with ROUGEL, according to both automatic and human evaluations. Acknowledgments This research was supported in part by Microsoft Research, a NSF graduate research fellowship, and the DARPA CwC program through ARO (W911NF-15-1-0543). The authors also thank Antoine Bosselut, Dinghan Shen, and Shuai Tang for their feedback, the anonymous reviewers for their useful comments, and the participants who took part in our study. References Mohammed Alshahrani, Spyridon Samothrakis, and Maria Fasli. 2017. Word mover’s distance for affect detection. 2017 International Conference on the Frontiers and Advances in Data Science. Kubilay Atasu, Thomas P. Parnell, Celestine D¨unner, Manolis Sifalakis, Haralampos Pozidis, Vasileios Vasileiadis, Michail Vlachos, Cesar Berrospi, and Abdel Labbi. 2017. Linear-complexity relaxed word mover’s distance with GPU acceleration. In IEEE International Conference on Big Data. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Georgios Balikas, Charlotte Laclau, Ievgen Redko, and Massih-Reza Amini. 2018. Cross-lingual document retrieval using regularized Wasserstein distance. CoRR, abs/1805.04437. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL. Regina Barzilay and Mirala Lapata. 2008. Modeling local coherence: An entity-based approach. In Computational Linguistics. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In EACL. Asli Celikyilmaz, Antoine Bosselut, Xiadong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In NAACL. Arun Tejasvi Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In ACL. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH. Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. 
Improving sequence-to-sequence learning via optimal transport. In ICLR. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In ACL. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In LREC. Andrew M. Dai, Christopher Olah, and Quoc V. Le. 2015. Document embedding with paragraph vectors. In NeurIPS Deep Learning Workshop. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Second International Conference on Human Language Technology Research. Ondrej Dusek, Jekaterina Novikova, and Verena Rieser. 2019. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge. In Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In NAACL-HLT. Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In ACL. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In ACL workshop for NLP Open Source Software. Yoav Goldberg, Graeme Hirst, Yang Liu, and Meng Zhang. 2018. Neural network methods for natural language processing. Computational Linguistics, 44(1). Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In NAACL. Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NeurIPS. Gao Huang, Chuan Guo, Matt J. Kusner, Yu Sun, Fei Sha, and Kilian Q. Weinberger. 2016. Supervised word mover’s distance. In NeurIPS. Natasha Jaques, Shixiang Gu, Dxmitry Bahdanau, Jose Miguel Hernandez-Lobato, Richard E. Turner, and Douglas Eck. 2017. Counterfactual multi-agent policy gradients. In ICML. Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In EACL. Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. ICLR, 37. Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In IJCAI. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of with human judgments. In Second Workshop on Statistical Machine Translation. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In ACL. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In LREC. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, Caglar G¨ulehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence RNNs and beyond. In CoNLL. 
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In EMNLP. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Ramakanth Pasunuru and Mohit Bansal. 2017. Reinforced video captioning with entailment rewards. In EMNLP. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In ICLR. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Christopher Purdy, Xinyu Wang, Larry He, and Mark O. Riedl. 2018. Predicting generated story quality with quantitative measures. In AIIDE. MarcAurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. In ICLR. Ehud Reiter. 2011. Task-based evaluation of NLG systems: Control vs real-world context. In UCNLG+Eval. Ehud Reiter and Anja Belz. 2006. GENEVAL: A proposal for shared-task evaluation in NLG. In INLG. Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. In CVPR. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. In CVPR. Tomasi C. Rubner, Y. and L. J. Guibas. 1998. A metric for distributions with applications to image databases. In IEEE. Manasvi Sagarkar, John Wieting, Lifu Tu, and Kevin Gimpel. 2018. Quality signals in generated stories. In *SEM 2018. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. Octavia-Maria Sulea. 2017. Recognizing textual entailment in Twitter using word embeddings. In 2nd Workshop on Evaluating Vector-Space Representations for NLP. Tsegaye Misikir Tashu and Tom´as Horv´ath. 2018. Pair-Wise: Automatic essay evaluation using word mover’s distance. In CSEDU. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR. Evan J. Williams. 1959. Regression Analysis, volume 14. Wiley. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Machine Learning. Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J. Witbrock. 2018. Word movers embedding: From word2vec to document embedding. In EMNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. ArXiv:1609.08144. Hao Zhang and Jie Wang. 2018. Semantic WordRank: Generating finer single-document summarizations. In IDEAL. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In EMNLP. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. CoRR, abs/1904.09675. 
A Appendix

A.1 Datasets

Summaries and Essays: For the intrinsic tasks in §4, we use two types of human-evaluated texts: machine-generated summaries and human-authored essays. We follow Kusner et al. (2015) and remove punctuation and stopwords. (For contextual embeddings, these are removed after the embeddings are obtained.) The details of the subsets we used are in Table 7.

| Summaries | Essays
# documents | 2,085 | 1,088
# tokens | 255,609 | 164,776
# types | 12,882 | 6,381
average length (tokens) | 65 | 151
average length (sent.) | 3.4 | 7.5

Table 7: Corpora statistics.

CNN/Daily Mail: The CNN/Daily Mail dataset (Nallapati et al., 2017; Hermann et al., 2015) is a collection of online news articles along with multi-sentence summaries. We use the same data splits as in Nallapati et al. (2017). Earlier work anonymized entities by replacing each named entity with a unique identifier (e.g., Dominican Republic→entity15). In this work we used the non-anonymized version.

Stats | CNN/DM
Avg. # tokens document | 781
Avg. # tokens summary | 56
Total # train doc-summ. pairs | 287,229
Total # validation doc-summ. pairs | 13,368
Total # test doc-summ. pairs | 11,490
Input token length | 400/800
Output token length | 100

Table 8: Summary statistics of the CNN/Daily Mail (CNN/DM) dataset.

A.2 More Examples

In Table 9, we show samples of the summaries that we used to perform intrinsic evaluations in the main text.

A.3 Extrinsic Model Training Details

We use 128-dimensional bidirectional 2-layered LSTMs for the encoder and 128-dimensional unidirectional LSTMs for the decoder. For both datasets, we limit the input and output vocabulary size to the 30,000 most frequent tokens in the training set. We initialize word embeddings with FastText14 (Mikolov et al., 2018) 300-dimensional vectors and fine-tune them during training. For the WMS, SMS, and S+WMS embeddings, we use the GloVe word embeddings described in §4. We train using Adam with a learning rate of 0.001 for the MLE models and $10^{-5}$ for the MLE+RL models. We select the MLE models with the lowest cross-entropy loss and the MLE+RL models with the highest reward on a sample of validation data to evaluate on the test set. At test time, we use beam search of width 5 on all our models to generate final predictions. For the mixed RL-trained models, we initialize the weights with the pre-trained MLE model, and we start with $\gamma = 0.97$ and gradually increase its value. We train our models for ∼25 epochs, which took 1–2 days on an NVIDIA V100 GPU machine.

A.4 Policy Gradient Reinforce Training

Maximum likelihood-based training of sequence generation models suffers from exposure bias: during training the model is optimized against the empirical data distribution, whereas at test time its generated text is evaluated with automatic metrics (Ranzato et al., 2015). A reinforcement-learning-based policy gradient approach addresses this issue by directly optimizing discrete target evaluation metrics that are non-differentiable. We use REINFORCE (Williams, 1992) to learn a policy $p_\theta$ defined by the model parameters $\theta$ to predict the next action (word). The RL loss function is defined as:

$L_{\text{RL}} = \mathbb{E}_{\hat{y} \sim p_\theta}[r(\hat{y})]$   (9)

where $\hat{y}$ is the sequence of sampled words. The derivative of the objective function based on Monte Carlo sampling yields:

$\nabla_\theta L_{\text{RL}} = -(r(\hat{y}) - b) \nabla_\theta \log p_\theta(\hat{y})$   (10)

The baseline $b$ is a bias estimator and is used for variance reduction in RL training.
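To make the REINFORCE setup above concrete, the hedged sketch below shows one way to draw the sampled sequence ŷ and the greedy baseline ỹ from per-step decoder distributions and to form a surrogate loss whose gradient matches Eq. (10) with the self-critical baseline b = r(ỹ) (the form given as Eq. (11) below). Reusing a single logits tensor for both sequences and the toy reward values are illustrative simplifications, not the paper's implementation.

```python
import torch

def sample_and_greedy(step_logits):
    """Produce a sampled sequence y^ and a greedy baseline sequence y~.

    `step_logits` is a (seq_len, vocab_size) tensor of decoder logits; in
    the real model the two sequences come from separate decoding runs,
    which this sketch simplifies by reusing one set of logits.
    """
    probs = torch.softmax(step_logits, dim=-1)
    sampled = torch.multinomial(probs, num_samples=1).squeeze(-1)   # y^
    greedy = probs.argmax(dim=-1)                                   # y~
    sampled_logp = torch.log(probs[torch.arange(len(sampled)), sampled]).sum()
    return sampled, greedy, sampled_logp

def reinforce_loss(sampled_logp, r_sampled, r_greedy):
    """Surrogate loss whose gradient is -(r(y^) - r(y~)) * grad log p(y^)."""
    return -(r_sampled - r_greedy) * sampled_logp

# Toy usage: 4 decoding steps over a 6-word vocabulary.
logits = torch.randn(4, 6, requires_grad=True)
y_hat, y_tilde, logp = sample_and_greedy(logits)
loss = reinforce_loss(logp, r_sampled=0.42, r_greedy=0.37)
loss.backward()   # gradients flow through the sampled tokens' log-probability
print(y_hat.tolist(), y_tilde.tolist(), loss.item())
```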
In this work we use self-critical training and use the reward obtained from a sequence that is generated by greedily decoding, ˜y, as a baseline: ▽θLRL = −(r(ˆy) −r(˜y)) ▽θ log pθ(ˆy) (11) A.5 Human Evaluations Evaluation Procedure We randomly selected 100 samples from the CNN/Daily Mail test set 14https://fasttext.cc/docs/en/ english-vectors.html Samples Summaries Sample #1 Reference. Freddie Gray, who is black, asked for medical help but was denied during 00-minute police car ride, eventually paramedics were called. Deputy police commissioner Kevin Davis conceded their failure. But chief commissioner refuses to resign over the death. Six officers are suspended without pay during an investigation. Hypothesis. Baltimore Police Commissioner Anthony Batts ruled out his resignation despite that fact that his deputy admitted they should have sought medical attention for Freddie Gray. Six officers have been suspended with pay as local police and federal authorities investigate. Commissioner Anthony Batts has ruled out the possibility of his resignation. Sample #2 Reference. Choc on Choc’s chocolates come in three different flavours. The face of each politician is emblazoned on milk Belgium chocolate bars. Cameron’s has blueberries, Clegg is honeycomb and Miliband is raspberry. Hypothesis. UNK lollies on 273 invalid chocolates come in three different flavours. Contains three different flavours - the colours associated with each leader. David Cameron, Nick Clegg, Nick Clegg, Nick Clegg and David Cameron. Sample #3 Reference Essay. The setting seems to be as formidable an opponent as the actual workout. It seems as if everything is against the cyclist, including nature. As the day progresses, and the cyclist’s journey continues, the setting becomes harsher and harsher. After passing the first “town”, the “sun was beginning to beat down.” In need of water, all a cruel pump gives him is “a tarlike substance.” His sufferings continue, increasingly pummeled by his surroundings and his thirst for water. If dehydration was not enough, the flat terrain gave way to “rolling hills”, which would only punish his legs more. Reaching possible salvation, his hopes are crushed when the “Welch’s Grape Juice Factory” turns out to be abandoned. All these events are enough to destroy anyone’s spirit. The cyclist almost gives up hope to accept certain death. He has become ferociously beaten by his very surroundings. It appears as if he is fated to die alone in the blistering heat. Although he hangs his head in despair, he still continues on the path of disappointment. In a twist of fate, he encounters a thriving store where he halts and drinks. Finally encountering his salvation, this particular setting brings new hope and relief to the cyclist who has finally survives his trek through nature. Hypothesis. The features of the setting affect the cyclist alot. The hot sun beating down on him makes him sweat and makes him thirsty. The bumpy roods and hills make him work harder. The abandoned places make him lose hope. If faced with these obstacles I would have been affected in the same way. As I believe any human would be. Table 9: Examples of human generated and model generated summaries from Summaries and Essays datasets and use workers from Amazon Mechanical Turk as judges to evaluate them on the four criteria (redundancy, focus, coherence, and overall). Following DUC (Document Understanding Conferences) style evaluations (https://duc. 
nist.gov/), we performed a head-to-head evaluation and randomly showed Turkers two modelgenerated summaries. We asked the human annotators to rate each summary on the same metrics as before without seeing the source document or ground truth summaries.
2019
264
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2761–2772 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2761 Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains Claudia Schulz1, Christian M. Meyer1, Jan Kiesewetter2, Michael Sailer3, Elisabeth Bauer3, Martin R. Fischer2, Frank Fischer3, and Iryna Gurevych1 1 Ubiquitous Knowledge Processing (UKP) Lab, Technische Universit¨at Darmstadt, Germany 2 Institute of Medical Education, University Hospital, LMU M¨unchen, Germany 3 Chair of Education and Educational Psychology, LMU M¨unchen, Germany http://famulus-project.de Abstract Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks. 1 Introduction Current deep learning methods require large amounts of training data to achieve reasonable performance. Scalable solutions to acquire labelled data use crowdsourcing (e.g., Potthast et al., 2018), gamification (Ahn, 2006), or incidental supervision (Roth, 2017). For many complex tasks in expert domains, such as law or medicine, this is, however, not an option since crowdworkers and gamers lack the necessary expertise. Annotating data manually is therefore often the only way to train a model for tasks aiding experts with their work. But the more expertise an annotation task requires, the more time- and funding-intensive it typically is, which is why many projects suffer from small corpora and deficient models. In this paper, we propose and analyse an annotation setup aiming to increase the annotation speed and ease for a discourse-level sequence labelling task requiring extensive domain expertise, without sacrificing annotation quality. For the first time, we study the effects of automatically suggesting annotations to expert annotators in a task that is hard for both humans (only moderate agreement) and machine learning models (only mediocre performance) and compare the effects across different domains and suggestion models. We furthermore investigate how the performance of the models changes if they continuously learn from expert annotations. As our use case, we consider the task of annotating epistemic activities in diagnostic reasoning texts, which was recently introduced by Schulz et al. (2018, 2019). The task is theoretically grounded in the learning sciences (Fischer et al., 2014) and enables innovative applications that teach diagnostic skills to university students based on automatically generated feedback about their reasoning processes. 
This task is an ideal choice for our investigations, since it is novel, with limited resources and experts available, and so far neural prediction models only achieve an F1 score of 0.6, while also human agreement is in a mid range around α = 0.65. Schulz et al. (2018) created annotated corpora of epistemic activities for 650 texts in the medicine domain (MeD) and 550 in the school teaching domain (TeD). We extend these corpora by 457 and 394 texts, respectively. As a novel component, half of the domain expert annotators receive automatically generated annotation suggestions. That is, the annotation interface features texts with (suggested) annotations rather than raw texts. Annotators can accept or reject the suggested annotations as well as add new ones, as in the standard annotation setup. Based on the collected data, we investigate the effects of these suggestions in terms of inter- and intra-annotator agreement, annotation time, sug2762 gestion usefulness, annotation bias, and the type of suggestion model. As our analysis reveals positive effects, we additionally investigate training suggestion models that learn continuously as new data becomes available. Such incremental models can benefit tasks with no or little available data. Our work is an important step towards our vision that even hard annotation tasks in expert domains, requiring extensive training and discourselevel context, can be annotated more efficiently, thus advancing applications that aid domain experts in their work. Besides epistemic activities, discourse-level expert annotation tasks concern, for example, legal documents (Nazarenko et al., 2018), psychiatric patient–therapist interactions (Mieskes and Stiegelmayr, 2018), or transcripts of police body cameras (Voigt et al., 2017). The contributions of our work are: (1) We study the effects of automatically suggesting annotations to expert annotators across two domains for a hard discourse-level sequence labelling task. (2) We learn incremental suggestion models for little data scenarios through continuous adjustments of the suggestion model and discuss suitable setups. (3) We publish new diagnostic reasoning corpora for two domains annotated with epistemic activities.1 2 Related Work Annotation Suggestions Previous work on automatic annotation suggestion (sometimes called pre-annotation) focused on token- or sentencelevel annotations, including the annotation of partof-speech tags (Fort and Sagot, 2010), syntactic parse trees in historical texts (Eckhoff and Berdicevskis, 2016), and morphological analysis (Felt et al., 2014). A notable speed-up of the annotation could be observed in these tasks, up to 70 % (Felt et al., 2014). However, Fort and Sagot (2010) find that annotation suggestions also biased the annotators’ decisions. Rosset et al. (2013) instead report no clear bias effects for their pre-annotation study of named entities. Ulinski et al. (2016) investigate the effects of different suggestion models for dependency parsing. They find that models with an accuracy of at least 55 % reduce annotation time. Our work focuses on a different class of tasks, namely hard discourse-level tasks, in which the expert annotators only achieve a moderate agreement. 1https://tudatalib.ulb.tu-darmstadt. de/handle/tudatalib/2001 Annotation tasks in the medical domain are related to our use case in diagnostic reasoning. Lingren et al. (2014) suggest medical entity annotations in clinical trial announcements. Kholghi et al. 
(2017) also investigate medical entity annotation, using active learning for their suggestion model, which results in a speed-up of annotation time. South et al. (2014) use automatic suggestions for de-identification of medical texts and find no change in inter-annotator agreement or annotation time. In contrast to these works, we use a control group of two annotators, who never receive suggestions, and compare the performance of all annotators to previous annotations they performed without annotation suggestions. Work on the technical implementation of annotation suggestions is also still focused on word- or sentence-level annotation types. Meurs et al. (2011) use the GATE annotation framework (Bontcheva et al., 2013) for suggestions of biological entities. Yimam et al. (2014) describe the WebAnno system and discuss suggestions of part-of-speech tags and named entities using the MIRA algorithm (Crammer and Singer, 2003) for suggestion generation. Skeppstedt et al. (2016) introduce the PAL annotation tool, which provides suggestions and active learning for entities and chunks generated by logistic regression and SVMs. Greinacher and Horn (2018) present the annotation platform DALPHI, suggesting named entity annotations based on a recurrent neural network. Documents to be annotated are chosen by means of active learning, enabling continuous updates of the suggestion model during the annotation process. We also investigate continuous updates of the suggestion model during the annotation process, but focus on a task in which annotators require vast training and domain expertise. Continuous Model Adjustment Read et al. (2012) distinguish two ways of training a model when new data becomes available continuously: using batches or single data points for the continuous adjustment, the latter often being referred to as online learning. We experiment with both adjustment strategies. P´erez-S´anchez et al. (2010) propose incrementally adjusting a neural network as new data becomes available, i.e. only using the newly available data for the update. In addition to using incremental training, we also experiment with cumulative training, where both previously available and new data is used for the model 2763 The patient reports to be lethargic and feverish. From the anamnesis I learned that he had purulent tonsilitis and is still suffering from symptoms. I first performed some laboratory tests and notice the decreased number of lymphocytes, which can be indicative of a bone marrow disease or an HIV infection. The HIV test is positive. However, the results from the blood cultures are negative, so it is a virus, parasite, or a fungal infection causing the symptoms. Figure 1: Exemplary diagnostic reasoning text from the medicine domain, annotated with epistemic activity segments: evidence generation, evidence evaluation, drawing conclusions, hypothesis generation. adjustment. Andrade et al. (2017), Castro et al. (2018), and Rusu et al. (2016) investigate adapting neural networks to new data with additional classes or even new tasks, requiring to change the structure of the neural network. Our setting is less complex as the neural network is trained on all possible classes from the beginning. Recent work also investigates pre-training neural networks before training them on the actual data (Garg et al., 2018; Shimizu et al., 2018; Serban et al., 2016). The model is thus adapted only once instead of continuously as in our work. 3 Diagnostic Reasoning Task The annotation task proposed by Schulz et al. 
(2018) has interesting properties for studying the effects of annotation suggestions in hard expert tasks: (1) A small set of annotated data is available for two different domains. (2) Other than in wellunderstood low-level tasks, such as part-of-speech tagging or named entity recognition, the expert annotators require the discourse context to identify epistemic activities. This is a hard task yielding only inter-rater agreement scores in a mid range. (3) Prediction models only achieve F1 scores of around 0.6, which makes it unclear if the suggestion quality is sufficient. The previously annotated data consists of 650 German texts in the medical domain (MeD) and 550 texts in the teacher domain (TeD). The texts were written by university students working on online case simulations, in which they had to diagnose the disease of a fictional patient (MeD), or the cause of behavioural problems of a fictional pupil (TeD) based on dialogues, observations, and test results. For each case simulation, the students explained their diagnostic reasoning process in a brief self-explanation text.2 Five (MeD) and four (TeD) domain experts annotated individual reasoning steps in the anonymised texts in terms of the epistemic activ2Study approved by the university’s ethics commission. ities (Fischer et al., 2014), i.e. activities involved in reasoning to develop a solution to a problem in a professional context. We focus on the four most frequently used epistemic activities for our annotations: hypothesis generation (HG), evidence generation (EG), evidence evaluation (EE), and drawing conclusions (DC). HG is the derivation of possible diagnoses, which initiates the reasoning process. EG constitutes explicit statements of obtaining evidence from information given in the case simulation or of recalling own knowledge. EE is the mentioning of evidence considered relevant for diagnosis. Lastly, DC is defined as the derivation of a final diagnosis, which concludes the reasoning process. As shown in Figure 1, the annotation of these epistemic activities is framed as a joint discourselevel segmentation and classification task, that is, epistemic activities are segments of arbitrary length not bound to phrase or sentence level. 4 Annotating with Suggestions To conduct our experiments with annotation suggestions, we use the same annotation task and platform as Schulz et al. (2018). We obtained their original data as well as further anonymised reasoning texts and we asked their expert annotators to participate in our annotation experiments. This allows us to study the effects of annotating data with and without suggestions, without having to account for changes in annotation performance due to individual expert annotators. In total, we annotate 457 (MeD) and 394 (TeD) new reasoning texts during our experiments. Figure 2 shows an overview of our annotation phases in MeD with the five expert annotators A1 to A5. S1 and S2 indicate the previous annotation phases by Schulz et al. (2018). In their work, all experts first annotated the same texts (S1) and then a different set of texts each (S2). In our work, all experts annotate the same texts in all phases. We provide annotation suggestions to annotators A1 2764 S1 S2 O1 O2 O3.1 O3.2 O4.1 O4.2 pers3 univ univ univ A1 A2 A3 A4 A5 univ univ univ univ univ univ pers1 pers2 Our Work Schulz et al. Figure 2: Annotation setup in MeD: Red indicates suggestions by a univ(ersal) or pers(onalised) model. 
The dashed boxes indicate annotations of texts that were already annotated in S1 or O1. to A3 (randomly chosen among the five annotators) and instruct them to only accept epistemic activities if these coincide with what they would have annotated without suggestions and else manually annotate the correct text spans. We study the effectiveness of the suggestions (O1), the intraannotator consistency (O2), the annotation bias induced by suggestions (O3), and the effectiveness of a personalised suggestion model (O4). Annotators A4 and A5 act as a control group, never receiving suggestions. We use an analogous setup for TeD except that there is no annotator A3. To create gold standard annotations, we use majority voting and annotator meetings as Schulz et al. (2018), and we publish our final corpora. 4.1 Implementation Annotation Tool Since we work with the same expert annotators as Schulz et al. (2018), we choose to also use the same annotation platform, INCEpTION (Klie et al., 2018), so that the expert annotators are already familiar with the interface. INCEpTION furthermore provides a rich API to integrate our suggestion models. As shown in Figure 3, annotation suggestions are shown in grey, distinguishing them clearly from differently coloured manual annotations. Suggestions can be easily accepted or rejected by single or double clicking. Additionally, manual annotations can be created as usual. Figure 3: Annotation suggestion (grey) and accepted suggestion (orange) in the INCEpTION platform. Suggestion Models To suggest annotations, we use a state-of-the-art BiLSTM network with a conditional random field output layer (Reimers and Gurevych, 2017), which has proven to be a suitable architecture for related tasks (Ajjour et al., 2017; Eger et al., 2017; Levy et al., 2018). We train this model using the gold standard of Schulz et al. (2018), consisting of annotations for all texts from phases S1 and S2. The learning task is framed as standard sequence labelling with a BIOencoding (Begin, Inside, Outside of a sequence) for the four epistemic activities hypothesis generation (HG), evidence generation (EG), evidence evaluation (EE), and drawing conclusions (DC). More precisely, each token is assigned one of the labels ({B, I} × {HG, EG, EE, DC}) ∪{O}, where B-HG denotes the first token of a HG segment, I-HG denotes a continuation token of a HG segment (similarly for EG, EE, and DC), and O denotes a token that is not part of any epistemic activity.3 We use this suggestion model in O1– O3.1 and call it universal (univ), as it learns labels obtained from all annotators of a domain. For annotation phase O4.1, we train a personalised (pers) suggestion model for each annotator A1–A3, based on the epistemic activities identified by the respective annotator in phases S1 and S2. A personalised model thus provides suggestions tailored to a specific annotator. The idea of personalised models is that they may enable each annotator to accept more suggestions than possible with the universal model, which may lead to a speed-up in annotation time. Note, however, that each of these personalised models is trained using only 250 texts, 150 annotated by the respective annotator in S1 and 100 in S2. Instead, the universal model is trained using 650 (MeD) or 550 (TeD) texts. 
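The label scheme described above, ({B, I} × {HG, EG, EE, DC}) ∪ {O}, can be produced from segment annotations roughly as in the following sketch; the (start, end, activity) span format is an assumption made for illustration, not the actual export format of the annotation platform.

```python
def spans_to_bio(tokens, spans):
    """Convert epistemic-activity spans to token-level BIO labels.

    `spans` is a (hypothetical) list of (start_token, end_token_exclusive,
    activity) triples with activity in {"HG", "EG", "EE", "DC"}; spans are
    assumed to be non-overlapping, as in the gold data.
    """
    labels = ["O"] * len(tokens)
    for start, end, activity in spans:
        labels[start] = f"B-{activity}"          # first token of the segment
        for i in range(start + 1, end):
            labels[i] = f"I-{activity}"          # continuation tokens
    return labels

# Toy usage on a shortened reasoning sentence.
toks = "The HIV test is positive so it is likely an HIV infection".split()
print(spans_to_bio(toks, [(0, 5, "EE"), (5, 12, "DC")]))
# ['B-EE', 'I-EE', 'I-EE', 'I-EE', 'I-EE', 'B-DC', 'I-DC', ...]
```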
We train ten models with different seeds for each setup (universal and three personalised for MeD and TeD), applying the same parameters for all of them: one hidden layer of 100 units, variational dropout rates for input and hidden layer of 0.25, and the nadam optimiser (Dozat, 2016). We furthermore use the German fastText word embeddings (Grave et al., 2018) to represent the input. We apply early stopping after five epochs without improvement. For the actual suggestions in our experiments, we choose the model with the best performance among the ten for each setup. 3We utilise the non-overlapping gold annotations of Schulz et al. (2018), where a preference order over epistemic activities was applied to avoid overlapping segments. 2765 4.2 Suggestion Quality Epistemic activity identification is a particularly hard discourse-level sequence labelling task, both for expert annotators and machine learning models. Before beginning with our annotation experiments, we evaluate our different suggestion models, as shown in Table 1. All models exhibit midrange prediction capabilities, which we consider sufficient for automatic annotation suggestions. This is supported by Greinacher and Horn (2018), who find that suggestion models with an accuracy of at least 50 % improve annotation performance and speed for named entity recognition. Still, the overall performance for our task is clearly lower than in low-level tasks such as part-of-speech tagging, for which suggestions have been studied. Domain Test Data univ pers1 pers2 pers3 MeD gold data 0.63 0.51 0.58 0.55 ann. data — 0.51 0.60 0.58 TeD gold data 0.55 0.54 0.48 — ann. data — 0.60 0.49 — Table 1: Macro-F1 scores of the univ and pers models used in our experiments, evaluated on the gold and respective annotator-specific (ann.) annotations. We evaluate the performance of the personalised models using both the annotations by the respective annotator and the gold annotations. The overall lower performance on the gold data shows that the personalised models indeed learn to predict the annotation style of the respective annotator. We also observe lower performance of the personalised models compared to the universal models, which can be attributed to the smaller amount of annotated texts used for training. 4.3 Evaluation and Findings In this section, we examine the effects of annotation suggestions in detail, considering interannotator agreement, intra-annotator consistency, annotation bias and speed, as well as usefulness of suggestions and the impact of universal versus personalised suggestion models. Effectiveness of Suggestions Since the annotation of epistemic activities involves determining spans as well as labels, we measure the interannotator agreement (IAA) in terms of Krippendorff’s αU (Krippendorff, 1995) as implemented in DKPro Agreement (Meyer et al., 2014). To evaluate the effects of suggestions on the annotations of our experts, we compare the IAA between annotators with suggestions (A1–A3) – henceforth called the SUGGESTION group – against the IAA between annotators without suggestions (A4–A5) – denoted as the STANDARD group. Table 2 details the IAA of the two groups across all annotation phases described in Figure 2. 
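As a rough Python counterpart to the agreement computation, the sketch below applies the `krippendorff` package to token-level labels; note that this yields standard nominal α over pre-segmented tokens and is only an approximation of the unitizing αU computed with DKPro Agreement in the paper.

```python
# A rough sketch only: the paper uses the unitizing variant alpha_U from
# DKPro Agreement; the `krippendorff` PyPI package computes standard alpha
# over coded units, so here each token's label is treated as one unit.
import numpy as np
import krippendorff

def token_level_alpha(annotations, label_set=("O", "HG", "EG", "EE", "DC")):
    """`annotations` is a list of per-annotator label sequences of equal length."""
    label_to_id = {lab: i for i, lab in enumerate(label_set)}
    data = np.array([[label_to_id[lab] for lab in seq] for seq in annotations],
                    dtype=float)  # rows = annotators, columns = tokens
    return krippendorff.alpha(reliability_data=data,
                              level_of_measurement="nominal")

# Toy usage: two annotators labelling the same six tokens.
a1 = ["O", "EE", "EE", "DC", "DC", "O"]
a2 = ["O", "EE", "EE", "EE", "DC", "O"]
print(token_level_alpha([a1, a2]))
```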
MeD TeD Phase ST SU SU/ST ST SU SU/ST S1 0.65 0.67 0.67 0.65 0.64 0.65 O1 0.71 0.73 0.70 0.73 0.77 0.73 O2 0.66 0.69 0.64 0.66 0.76 0.67 O3.1 0.60 0.60 0.59 0.73 0.80 0.71 O3.2 0.57 0.62 0.65 0.64 0.66 0.65 O4.1 -0.47 0.43 0.21 0.67 0.72 0.65 O4.2 0.60 0.68 0.60 0.67 0.74 0.71 O1–O4 0.48 0.64 0.59 0.69 0.75 0.68 Table 2: Inter-annotator agreement in terms of Krippendorff’s αU for ST(ANDARD) and SU(GGESTION) and their inter-group agreement (SU/ST). Bold: Phases in which models were used for SU. First, we compare the overall IAA of both groups for the previous annotation phase S1 by Schulz et al. (2018) and all of our annotation phases O1–O4.2. We observe for TeD that the IAA of the SUGGESTION group is consistently higher than of the STANDARD group, as soon as annotators receive suggestions (starting in O1). Since the IAAs of the two groups were similar in S1, when no suggestions were given, we deduce that suggestions cause less annotation discrepancies between annotators in TeD. Below, we will investigate if this also introduces an annotation bias. For MeD, results are less clear, since the SUGGESTION group achieves only slightly higher IAA scores in most phases. Notable is the extreme outlier of the STANDARD group in O4.1. This is due to one annotator, whose EE (evidence evaluation) annotations deviated substantially from the other annotators. Considering the average IAA of our experiments without O4.1, we obtain very similar scores for the STANDARD (0.63) and SUGGESTION (0.66) group. Thus, there is little difference to reference phase S1, where SUGGESTION already yielded a 0.02 higher IAA. However, below we discuss the helpfulness and time saving of suggestions even in MeD. Intra-Annotator Consistency In O2, we mixed 100 new texts with 50 texts the annotators saw pre2766 viously during S1 or O1, but we did not inform the annotators about this setup. Table 3 shows the annotation consistency of each annotator in terms of intra-annotator agreement computed on those 50 double-annotated texts. Even a single annotator shows annotation discrepancies instead of perfect consistency, evidencing the difficulty of annotating epistemic activities. Since the intra-annotator agreement for annotators with suggestions (A1– A3) is similar to that without (A4–A5), we conclude that suggestions do not considerably change annotators’ annotation decisions. SUGGESTION STANDARD A1 A2 A3 A4 A5 av. MeD 0.74 0.76 0.79 0.78 0.80 0.77 TeD 0.77 0.64 — 0.72 0.70 0.71 Table 3: Intra-annotator agreement (in terms of Krippendorff’s αU) on double-annotated texts. Annotation Bias The higher IAA in the SUGGESTION compared to the STANDARD group in TeD may indicate an annotation bias, i.e. a tendency that the SUGGESTION group prefers the predicted labels over the actual epistemic activities. We test this unwanted effect by comparing the human–machine agreement between the experts’ annotations and the models’ predictions (in terms of Krippendorff’s αU) for both annotators with and without suggestions. Table 4 shows that, in both MeD and TeD, annotators who receive suggestions, i.e. SUGGESTION in O1–O3.1 and in O4.1, consistently have a slightly higher agreement of about 0.1 than annotators without suggestions in these phases. This indicates an annotation bias due to suggestions. In MeD, this bias is preserved even if annotators do not receive suggestions anymore (SUGGESTION in O3.2 and O4.2), whereas in TeD the bias fades. To further examine the gravity of the annotation bias, we compute the inter-group agreement, i.e. 
the average pairwise IAA between annotators with and without suggestions, denoted SU/ST in Table 2. We find that this agreement is similar to the agreement within the STANDARD group for both MeD and TeD. In other words, an annotator with and an annotator without suggestions have the same level of agreement as two annotators without suggestions. As a next step, we analyse the differences in the label distributions of the predictions and the SUGMeD TeD SU ST diff. SU ST diff. univ S1 0.65 0.67 –0.02 0.55 0.52 +0.03 O1 0.64 0.56 +0.08 0.52 0.42 +0.10 O2 0.55 0.48 +0.07 0.50 0.42 +0.08 O3.1 0.69 0.55 +0.14 0.54 0.40 +0.14 O3.2 0.52 0.45 +0.07 0.51 0.49 +0.02 O4.1 0.46 0.33 +0.13 0.47 0.39 +0.08 O4.2 0.53 0.49 +0.04 0.40 0.40 +0.00 pers O4.1 0.42 0.30 +0.12 0.49 0.41 +0.08 O4.2 0.41 0.45 –0.04 0.34 0.32 +0.02 Table 4: Average αU of annotators (in SU(GGESTION) and ST(ANDARD)) with predictions of the univ and their pers model and diff(erence) between the groups. Bold: Phases in which models were used for SU. GESTION and STANDARD annotations. In MeD, the SUGGESTION annotators use EE (evidence evaluation) labels slightly more often, which can also be observed for the predictions. In TeD, the SUGGESTION annotators use fewer EE labels, but more HG (hypothesis generation) labels than STANDARD annotators, which again matches the tendency of the predicted labels. This effect is, however, very small, since all label distributions are close to each other. The Jensen-Shannon divergence (JSD) between the label distributions of the two annotator groups is consistently below 0.02 in all suggestion phases (O1–O3.1, O4.1) with an average JSD of 0.011 (MeD) and 0.009 (TeD). There is almost no difference to the JSD of the remaining phases (0.009 for MeD, 0.010 for TeD), indicating that the difference between the groups cannot be attributed to the suggestions. We also compute the JSD of the SUGGESTION group and the predictions as well as the JSD of the STANDARD group and the predictions and find an average difference of the JSDs of −0.009 for MeD and < 0.001 for TeD, which indicates a small bias towards the suggested labels for MeD, but no obvious bias for TeD. We finally analyse the disagreement within both groups of annotators. Figure 4 shows the distribution of the disagreements for TeD’s SUGGESTION (left) and STANDARD group (right). We note that most disagreement occurs for EE labels. This is not surprising, as EE is the most frequently occurring label. The SUGGESTION group has a slightly higher disagreement for the DC (drawing conclusions) and HG labels, but overall, we do not observe substantial changes in the disagreement distribution, as also the disagreement for phases with2767 SU EG EE DC HG ST EG EE DC HG EG 4% 0% 3% EG 4% 1% 1% EE 4% 19% 15% EE 4% 21% 18% DC 0% 19% 9% DC 1% 21% 5% HG 3% 15% 9% HG 1% 18% 5% Figure 4: Disagreement among TeD annotators of the SU(GGESTION) and ST(ANDARD) groups in phases with suggestions models (O1–O3.1 and O4.1). out suggestions is up to 3 percentage points different between the two groups. For MeD, we find even smaller differences between the two groups. Based on all analyses, we consider the annotation bias negligible, since suggestions do not cause negative annotation discrepancies compared to the standard annotation setup without suggestions. Annotation Time Table 5 shows that nearly all annotators performed annotations faster in our experiments compared to previous annotations by Schulz et al. 
(2018), which can be attributed to the annotation experience they collected. We note that annotators in the SUGGESTION group (A1–A3) always speed up compared to previous annotations, whereas some of the annotators in the STANDARD group (A4–A5) slow down. Furthermore, on average, annotators in the SUGGESTION group exhibit a higher speed-up of annotation time: A1–A3 have a speed-up of 35 % compared to only 21 % for A4–A5 in MeD, and 20 % compared to only 11 % in TeD. Thus, suggestions make the annotation of epistemic activities more efficient. SUGGESTION STANDARD Phase A1 A2 A3 A4 A5 MeD S1–S2 1.92 2.13 1.82 3.78 1.94 O1–O4 0.88 1.60 1.29 2.46 2.05 speed-up 54 % 25 % 29 % 36 % −6 % TeD S1–S2 2.73 2.91 — 2.57 2.31 O1–O4 1.81 2.70 — 2.76 1.59 speed-up 34 % 7 % — −8 % 31 % Table 5: Average annotation time per text (in minutes) and speed-up of our compared to previous annotations. Usefulness of Suggestions In addition to positive informal feedback from the SUGGESTION annotators about the usefulness of suggestions, we also perform an objective evaluation. As a new metric of usefulness, we propose the acceptance rate of suggestions. Table 6 shows that on average 56 % of the suggestions are accepted by the expert annotators in MeD and 54 % in TeD. Closer analysis reveals that in the many rejected cases, only the segment boundaries of suggestions were incorrect. This leads us to conclude that suggestions ease the difficult task of annotating epistemic activities. O1 O2 O3.1 O4.1 av. MeD 58 % 49 % 62 % 54 % 56 % TeD 59 % 55 % 60 % 43 % 54 % Table 6: Percentage of accepted suggestions. Personalised versus Universal Both in MeD and TeD, Table 2 shows a lower IAA in the SUGGESTION group when suggestions are given by a personalised model (O4.1) compared to the universal model (O1–O3.1). This can be explained by the fact that annotators are biased (see Table 4, O4.1 pers) towards different annotations due to suggestions by different personalised models. We observe that annotators also accept fewer suggestions from the personalised than from the universal models (see Table 6), which can be attributed to the worse prediction performance of the personalised models (see Table 1). We conclude that our universal models exhibit more positive effects than the personalised models, as our goal is to create a gold standard corpus. Discussion Our annotation study shows that annotation suggestions have various positive effects on the annotation of epistemic activities, despite the mediocre performance of our suggestion models. In particular, the agreement between annotators in TeD is increased without inducing a noteworthy annotation bias, and annotation time decreases in both MeD and TeD. Since the task of epistemic activity identification is a particularly hard one, both for humans and for machine learning models, we expect that the positive effects of annotation suggestions generalise to other discourse-level sequence labelling tasks. 5 Training Suggestion Models The previous section established that annotation suggestions have positive effects on annotating epistemic activities. However, these suggestions were only possible since Schulz et al. (2018) had already annotated 550 reasoning texts in TeD and 650 in MeD, which were used to train our suggestion models. Envisioning suggestions for similar tasks with fewer or even no existing annotations, this section simulates suggestions of our universal 2768 models in this scenario. 
We experiment with different methods of training our models with only a small number of ‘already annotated’ texts and then continuously adjusting the models when ‘newly annotated’ texts become available. 5.1 Approach We use the gold annotations of Schulz et al. (2018) for our experiments. The ongoing annotation of texts and the continuously increasing amount of available training data can be simulated as a (random) sequence S of texts ti becoming available at each time step i, i.e. S = t1, t2, . . . , tn. In addition to model adjustments at every time step, representing an online learning setup, we experiment with adjusting our models using bundles of texts (called batches by Read et al. (2012)). The models are thus only adjusted after each jth time step, where j is the bundle size. We experiment with bundle sizes 10, 20, 30, 40, and 50 and represent the single-step setup as bundle size 1. The easiest way to adjust a suggestion model for each new bundle is to train a new model from scratch using the union of the new and all previously available bundles. We call this adjustment method RETRAIN and use bundle size 50. As a more advanced method, we suggest repeatedly training the existing model every time a new bundle of texts becomes available, i.e. the weights of the model are updated with each new bundle. We contrast two strategies for updating the model: the cumulative method (CUM) uses the union of the new and all previously available bundles of texts for training, whereas the incremental method (INC) uses only the new bundle. For all model adjustment experiments, we use the architecture of our suggestion models described in Section 4.1. We report the average performance over ten runs for each setup (adjustment method, bundle size, domain). Our text sequence S has length 270. All models in the CUM and INC setup are initially trained on 10 texts before the repeated training with particular bundle sizes. 5.2 Results and Evaluation We observe similar trends for MeD and TeD and therefore only present our MeD results in detail. Model Performance Figure 5 shows the macroF1 scores for the different adjustment methods with various bundle sizes. Using CUM, performance is very similar for all bundle sizes (1–50), 0 50 100 150 200 250 0.3 0.4 0.5 0.6 number of texts available CUM INC 10 INC 30 INC 50 INC 1 RETRAIN Figure 5: Macro-F1 after each adjustment using different methods and bundle sizes in MeD. thus represented by a single line in Figure 5. INC with bundle sizes 20 and 40 are omitted from the figure for readability. We observe that repeatedly training the model with CUM yields the same performance as RETRAIN, i.e. as training a new model from scratch for every new bundle. Furthermore, the performance of CUM rapidly increases with each bundle for the first 70 texts, reaching 0.5 macro-F1. The performance increase is more gradual thereafter, reaching 0.6 after 270 texts. Using INC for repeated training, bundle size influences performance: A small bundle size of 1 to 20 results in unsteady performance, which increases in the long-run but shows decreases after training on some of the bundles. In contrast, bundle sizes of 30 and higher show a steady increase in performance, similar to CUM. However, after having trained on at least 70 texts, INC adjustments with a bundle size smaller than 50 yield lower performance results than CUM adjustments. 
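The adjustment strategies compared above can be summarised in a short simulation loop. The sketch below is not the authors' implementation; the callables `init_model_fn`, `train_fn`, and `eval_fn` are placeholders standing in for the suggestion model of Section 4.1, its (continued) training routine, and a macro-F1 evaluation on held-out data.

```python
import random
from typing import Callable, List, Sequence

def simulate_adjustments(texts: Sequence, bundle_size: int, strategy: str,
                         init_model_fn: Callable, train_fn: Callable,
                         eval_fn: Callable, init_size: int = 10,
                         seed: int = 0) -> List[float]:
    """Simulate continuous adjustment of a suggestion model over a stream of
    annotated texts. strategy is one of:
      'RETRAIN' - train a fresh model on all texts seen so far (used with bundle size 50),
      'CUM'     - continue training the current model on all texts seen so far,
      'INC'     - continue training the current model on the new bundle only.
    Returns the evaluation score after each adjustment step."""
    rng = random.Random(seed)
    stream = list(texts)
    rng.shuffle(stream)                       # random sequence S = t1, ..., tn

    seen = stream[:init_size]                 # CUM/INC start from 10 initial texts
    model = train_fn(init_model_fn(), seen)
    scores = []

    for start in range(init_size, len(stream), bundle_size):
        bundle = stream[start:start + bundle_size]
        seen = seen + bundle
        if strategy == "RETRAIN":
            model = train_fn(init_model_fn(), seen)   # from scratch, all data
        elif strategy == "CUM":
            model = train_fn(model, seen)             # warm start, all data
        elif strategy == "INC":
            model = train_fn(model, bundle)           # warm start, new bundle only
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        scores.append(eval_fn(model))                 # e.g. macro-F1 on held-out data
    return scores
```

Averaging such runs over several random shuffles of the text stream corresponds to the ten runs per setup reported above.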
We conclude that to provide annotation suggestions, repeatedly training a model using INC with a bundle size of 30 or more can be a suitable alternative to CUM as well as to training models from scratch whenever new annotations become available, since the performance sacrifice is small. Training Time Having observed only slight differences in the model performance using our different adjustment methods, Figure 6 reveals a clear distinction regarding the time needed to adjust to each new bundle (trends of bundle sizes not illustrated lie between those in the figure). While the training time using CUM increases with each new 2769 0 50 100 150 200 250 0 1 2 3 4 number of texts available CUM 10 CUM 50 INC 10 INC 50 Figure 6: Training time (in minutes) for each adjustment using different methods and bundle sizes in MeD. bundle, since each successive adjustment is performed with more data, the training time of INC decreases with each bundle, until reaching a stable minimum ranging from 8 seconds for bundle size 10 to 47 seconds for bundle size 50. This decrease in training time, despite the stable amount of data used for each adjustment, is due to a decrease in the number of epochs required for training and indicates that the texts used in previous training steps are beneficial for training the model on a completely new bundle of texts. The RETRAIN method, not illustrated in the figure, requires far more time for adjustment than the repeated training methods. Training (from scratch) for 50 texts already takes 4.5 minutes, i.e. more than the CUM adjustment with 270 texts, and training for 270 texts takes 7.5 minutes. Discussion Our results show that INC adjustments are the most time-efficient, with each adjustment being two to five times faster than CUM adjustments. In fact, in the CUM online learning setup (bundle size 1), the model adjustment time is similar to, and after 100 documents higher than the time needed for annotation (1–2 minutes per text as shown in Table 5). However, the adjustment times of CUM with a bundle size of 10 or higher and of RETRAIN, are lower than the time needed for annotating the respective bundle of texts. Thus, CUM training with bundles larger than 1 is feasible for continuously adjusting suggestion models in our annotation task (while only a small amount of data is available), despite the long training time compared to INC. Since CUM achieves the same performance results as RETRAIN but needs far less time for adjustment, we dismiss RETRAIN as a suitable method for training suggestion models. 6 Conclusion We presented the first study of annotation suggestions for discourse-level sequence labelling requiring expert annotators, using the hard task of epistemic activity identification as an example. Our results show that even mediocre suggestion models have a positive effect in terms of agreement between annotators and annotation speed, while annotation biases are negligible. Based on our experiments on training suggestion models, we propose for future annotation studies that annotation suggestions can be given after having annotated only a small amount of data (in our case 70 texts), which ensures a sufficient model performance (0.5 macro-F1). Since the exact number of texts required to reach sufficient model performance depends on the task, we suggest using continuous model adjustments from the start, ensuring flexibility as to when to start giving suggestions (namely whenever sufficient performance is achieved). 
If computational resources are an important factor, we propose the usage of INC training with a bundle size of 30 or higher to optimise performance and training time. If model performance is more important, we recommend CUM training using a small bundle size of 10 or 20 to improve suggestions in short intervals. In our model adjustment experiments, we used gold annotations. To create them on the fly, annotation aggregation methods for sequence labelling (Simpson and Gurevych, 2018) can be used. We expect our work to have a large impact on future work requiring expert annotations, in particular regarding new tasks with no or little available data, for example for legal (Nazarenko et al., 2018), chemical (Guo et al., 2014), or psychiatric (Mieskes and Stiegelmayr, 2018) text processing. Acknowledgements This work was supported by the German Federal Ministry of Education and Research (BMBF) under the reference 16DHL1040 (FAMULUS). We thank our annotators M. Achtner, S. Eichler, V. Jung, H. Mißbach, K. Nederstigt, P. Sch¨affner, R. Sch¨onberger, and H. Werl. We also acknowledge Samaun Ibna Faiz for his contributions to the model adjustment experiments. 2770 References Luis von Ahn. 2006. Games with a Purpose. Computer, 39(6):92–94. Yamen Ajjour, Wei-Fan Chen, Johannes Kiesel, Henning Wachsmuth, and Benno Stein. 2017. Unit segmentation of argumentative texts. In Proceedings of the 4th Workshop on Argument Mining (ArgMin), pages 118–128, Copenhagen, Denmark. Mariela Andrade, Eduardo Gasca, and Er´endira Rend´on. 2017. Implementation of incremental learning in artificial neural networks. In Proceedings of the 3rd Global Conference on Artificial Intelligence (GCAI), volume 50 of EPiC Series in Computing, pages 221–232. Kalina Bontcheva, Hamish Cunningham, Ian Roberts, Angus Roberts, Valentin Tablan, Niraj Aswani, and Genevieve Gorrell. 2013. GATE Teamware: a web-based, collaborative text annotation framework. Language Resources and Evaluation, 47(4):1007– 1029. Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. 2018. End-to-end incremental learning. In Proceedings of the 15th European Conference on Computer Vision (ECCV), pages 241–257, Munich, Germany. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. Timothy Dozat. 2016. Incorporating nesterov momentum into adam. In ICLR 2016 Workshop Track, San Juan, Puerto Rico. Hanne Martine Eckhoff and Aleksandrs Berdicevskis. 2016. Automatic parsing as an efficient preannotation tool for historical texts. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 62–70, Osaka, Japan. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural End-to-End Learning for Computational Argumentation Mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 11–22, Vancouver, Canada. Paul Felt, Eric K. Ringger, Kevin Seppi, Kristian S. Heal, Robbie A. Haertel, and Deryle Lonsdale. 2014. Evaluating machine-assisted annotation in under-resourced settings. Language Resources and Evaluation, 48(4):561–599. Frank Fischer, Ingo Kollar, Stefan Ufer, Beate Sodian, Heinrich Hussmann, Reinhard Pekrun, Birgit Neuhaus, Birgit Dorner, Sabine Pankofer, Martin R. Fischer, Jan-Willem Strijbos, Moritz Heene, and Julia Eberle. 2014. 
Scientific Reasoning and Argumentation: Advancing an Interdisciplinary Research Agenda in Education. Frontline Learning Research, 4:28–45. Kar¨en Fort and Benoˆıt Sagot. 2010. Influence of Pre-Annotation on POS-Tagged Corpus Development. In Proceedings of the 4th Linguistic Annotation Workshop (LAW), pages 56–63, Uppsala, Sweden. Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched Language Models Using Dual RNNs and Same-Source Pretraining. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3078–3083, Brussels, Belgium. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC), pages 3483–3487, Miyazaki, Japan. Robert Greinacher and Franziska Horn. 2018. The DALPHI annotation framework & how its preannotations can improve annotator efficiency. arXiv:1808.05558. Yufan Guo, Diarmuid ´O S´eaghdha, Ilona Silins, Lin Sun, Johan H¨ogberg, Ulla Stenius, and Anna Korhonen. 2014. Crab 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment. In Proceedings of the 25th International Conference on Computational Linguistics (COLING): System Demonstrations, pages 76–80, Dublin, Ireland. Mahnoosh Kholghi, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2017. Active learning reduces annotation time for clinical concept extraction. International Journal of Medical Informatics, 106:25–31. Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION Platform: MachineAssisted and Knowledge-Oriented Interactive Annotation. In Proceedings of the 27th International Conference on Computational Linguistics (COLING): System Demonstrations, pages 5–9, Santa Fe, NM, USA. Klaus Krippendorff. 1995. On the Reliability of Unitizing Continuous Data. Sociological Methodology, 25:47–76. Ran Levy, Ben Bogin, Shai Gretz, Ranit Aharonov, and Noam Slonim. 2018. Towards an argumentative content search engine using weak supervision. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 2066– 2081, Santa Fe, NM, USA. Todd Lingren, Louise Deleger, Katalin Molnar, Haijun Zhai, Jareen Meinzen-Derr, Megan Kaiser, Laura Stoutenborough, Qi Li, and Imre Solti. 2014. Evaluating the impact of pre-annotation on annotation speed and potential bias: natural language processing gold standard development for clinical named 2771 entity recognition in clinical trial announcements. Journal of the American Medical Informatics Association, 21(3):406–413. Marie-Jean Meurs, Caitlin Murphy, Nona Naderi, Ingo Morgenstern, Carolina Cantu, Shary Semarjit, Greg Butler, Justin Powlowski, Adrian Tsang, and Ren´e Witte. 2011. Towards evaluating the impact of semantic support for curating the fungus scientic literature. In Proceedings of the 3rd Canadian Semantic Web Symposium (CSWS), pages 34–39, Vancouver, Canada. Christian M. Meyer, Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro Agreement: An Open-Source Java Library for Measuring InterRater Agreement. In Proceedings of the 25th International Conference on Computational Linguistics: System Demonstrations (COLING), pages 105–109, Dublin, Ireland. Margot Mieskes and Andreas Stiegelmayr. 2018. Preparing Data from Psychotherapy for Natural Language Processing. 
In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC), pages 2896–2902, Miyazaki, Japan. Adeline Nazarenko, Franc¸ois Levy, and Adam Wyner. 2018. An Annotation Language for Semantic Search of Legal Sources. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC), pages 1096–1100, Miyazaki, Japan. Beatriz P´erez-S´anchez, Oscar Fontenla-Romero, and Bertha Guijarro-Berdi˜nas. 2010. An incremental learning method for neural networks in adaptive environments. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), pages 1–8, Barcelona, Spain. Martin Potthast, Tim Gollub, Kristof Komlossy, Sebastian Schuster, Matti Wiegmann, Erika Patricia Garces Fernandez, Matthias Hagen, and Benno Stein. 2018. Crowdsourcing a Large Corpus of Clickbait on Twitter. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1498–1507, Santa Fe, NM, USA. Jesse Read, Albert Bifet, Bernhard Pfahringer, and Geoff Holmes. 2012. Batch-incremental versus instance-incremental learning in dynamic and evolving data. In Advances in Intelligent Data Analysis XI, pages 313–323. Berlin/Heidelberg: Springer. Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 338–348, Copenhagen, Denmark. Sophie Rosset, Cyril Grouin, Thomas Lavergne, Mohamed Ben Jannet, J´er´emy Leixa, Olivier Galibert, and Pierre Zweigenbaum. 2013. Automatic named entity pre-annotation for out-of-domain human annotation. In Proceedings of the 7th Linguistic Annotation Workshop & Interoperability with Discourse, pages 168–177, Sofia, Bulgaria. Dan Roth. 2017. Incidental Supervision: Moving beyond Supervised Learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 4885–4890, San Francisco, CA, USA. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive Neural Networks. arXiv:1606.04671. Claudia Schulz, Christian M. Meyer, and Iryna Gurevych. 2019. Challenges in the Automatic Analysis of Students’ Diagnostic Reasoning. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), Honolulu, HI, USA. (to appear). Claudia Schulz, Christian M. Meyer, Michael Sailer, Jan Kiesewetter, Elisabeth Bauer, Frank Fischer, Martin R. Fischer, and Iryna Gurevych. 2018. Challenges in the Automatic Analysis of Students’ Diagnostic Reasoning. arXiv:1811.10550. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building End-to-end Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), pages 3776–3783, Phoenix, AZ, USA. Toru Shimizu, Nobuyuki Shimizu, and Hayato Kobayashi. 2018. Pretraining Sentiment Classifiers with Unlabeled Dialog Data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 764–770, Melbourne, Australia. Edwin Simpson and Iryna Gurevych. 2018. Bayesian Ensembles of Crowds and Deep Learners for Sequence Tagging. arXiv:1811.00780. Maria Skeppstedt, Carita Paradis, and Andreas Kerren. 2016. PAL, a tool for Preannotation and Active Learning. 
Journal for Language Technology and Computational Linguistics, 31(1):81–100. Brett R. South, Danielle Mowery, Ying Suo, Jianwei Leng, scar Ferrndez, Stephane M. Meystre, and Wendy W. Chapman. 2014. Evaluating the effects of machine pre-annotation and an interactive annotation interface on manual de-identification of clinical text. Journal of Biomedical Informatics: Special Issue on Informatics Methods in Medical Privacy, 50:162–172. 2772 Morgan Ulinski, Julia Hirschberg, and Owen Rambow. 2016. Incrementally learning a dependency parser to support language documentation in field linguistics. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pages 440–449, Osaka, Japan. Rob Voigt, Nicholas P. Camp, Vinodkumar Prabhakaran, William L. Hamilton, Rebecca C. Hetey, Camilla M. Griffiths, David Jurgens, Dan Jurafsky, and Jennifer L. Eberhardt. 2017. Language from police body camera footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences, 114(25):6521–6526. Seid Muhie Yimam, Chris Biemann, Richard Eckart de Castilho, and Iryna Gurevych. 2014. Automatic annotation suggestions and custom annotation layers in webanno. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL): System Demonstrations, pages 91–96, Baltimore, MD, USA.
2019
265
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2773–2785 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2773 Deep Dominance - How to Properly Compare Deep Neural Models Rotem Dror Segev Shlomov Faculty of Industrial Engineering and Management, Technion, IIT rtmdrr|segevs|[email protected] Roi Reichart Abstract Comparing between Deep Neural Network (DNN) models based on their performance on unseen data is crucial for the progress of the NLP field. However, these models have a large number of hyper-parameters and, being non-convex, their convergence point depends on the random values chosen at initialization and during training. Proper DNN comparison hence requires a comparison between their empirical score distributions on unseen data, rather than between single evaluation scores as is standard for more simple, convex models. In this paper, we propose to adapt to this problem a recently proposed test for the Almost Stochastic Dominance relation between two distributions. We define the criteria for a high quality comparison method between DNNs, and show, both theoretically and through analysis of extensive experimental results with leading DNN models for sequence tagging tasks, that the proposed test meets all criteria while previously proposed methods fail to do so. We hope the test we propose here will set a new working practice in the NLP community.1 1 Introduction A large portion of the research activity in Natural Language Processing (NLP) is devoted to the development of new algorithms for existing or new tasks. To evaluate the quality of a new method, its performance on unseen datasets is compared to the performance of existing methods. The progress of the field hence crucially depends on our ability to draw conclusions from such comparisons. In the past, most supervised NLP models have been linear (or log-linear), convex and relatively simple (e.g. (Toutanova et al., 2003; Finkel et al., 2008; Ritter et al., 2011)). Hence, their training 1Our code is available at: https://github.com/ rtmdrr/deepComparison was deterministic and the number of configurations a model could have was rather small – decisions about model design were usually limited to feature selection and the selection of one of a few loss functions. Consequently, when one model performed better than another on unseen data it was safe to argue that the winning model was generally better, especially when the results were statistically significant (Dror et al., 2018), and when the effect of multiple hypothesis testing was taken into account in cases of evaluation with multiple datasets (Dror et al., 2017). With the recent emergence of Deep Neural Networks (DNNs), data-driven performance comparison has become much more complicated. While models such as LSTM (Hochreiter and Schmidhuber, 1997), Bi-LSTM (Schuster and Paliwal, 1997) and the transformer (Vaswani et al., 2017) improved the state-of-the-art in many NLP tasks (e.g. (Dozat and Manning, 2017; Hershcovich et al., 2017; Yadav and Bethard, 2018)), it is much more difficult to compare the performance of algorithms that are based on these models. This is because the loss functions of these models are non-convex (Dauphin et al., 2014), making the solution to which they converge (a local minimum or a saddle point) sensitive to random weight initialization and the order of training examples. 
Moreover, as these complex models are not fully understood, their training is often enhanced by heuristics such as random dropouts (Srivastava et al., 2014) that introduces another level of non-determinism to the training process. Finally, the increased model complexity results in a much larger number of configurations, governed by a large space of hyper-parameters for model properties such as the number of layers and the number of neurons in each layer. With so many degrees of freedom governed by random and arbitrary values, when comparing two 2774 DNNs it is not possible to consider a single test-set evaluation score for each model. If we do that, we might compare just the best models that someone happened to train rather than the methods themselves. Instead, it is necessary to compare between the score distributions generated by different runs of each of the models. Unfortunately, this comparison task, which is fundamental to the progress of the field, has not received a systematic treatment thus far. Our goal is to close this gap and propose a simple and effective comparison tool between two DNNs based on their test set score distributions. Particularly, we make four contributions: Defining a DNN comparison framework: We define three criteria that a DNN comparison tool should meet: (a) Since we observe only a sample from the population score distribution of each model, the decision should be significant under well justified statistical assumptions. This assures that future runs of the superior model are likely to get higher scores than future runs of the inferior model; (b) The decision mechanism should be powerful, being able to make decisions in most possible decision tasks; and, finally, (c) Since both models depend on random decisions, it is likely that none of them is promised to be superior over the other in all cases (e.g. with all possible random seeds). A powerful comparison tool should hence augment its decision with a confidence score, reflecting the probability that the superior model will indeed produce a better output. Analysis of existing solutions (§ 3, 5): The comparison problem we address has been highlighted by Reimers and Gurevych (2017b, 2018), who established its importance in an extensive experimentation with neural sequence models (Reimers and Gurevych, 2017a), and proposed two main solutions (§3). One solution, which we refer to as the collection of statistics (COS) solution, is based on the analysis of statistics of the empirical score distribution of the two algorithms – such as their mean, median and standard deviation (std), as well as their minimum and maximum values. Unfortunately, this solution does not respect criterion (a) as it does not deal with significance, and as we demonstrate in §5 its power (criterion (b)) is also limited. Their second solution is based on significance testing for Stochastic Order (SO) (Lehmann, 1955), a strict criterion that is hardly met in reality. While this solution respects criterion (a), it is not designed to deal with criterion (c), since it does not provide information beyond its decision if one of the distributions is stochastically dominant over the other or not, and as we show in §5 its power (criterion (b)) is very limited. A new comparison tool (§ 4): We propose a solution that meets our three criteria. 
Particularly, we adapt to our problem the recently presented concept of Almost Stochastic Order (ASO) between two distributions ( ´Alvarez-Esteban et al., 2017),2 and the statistical significance test for this property, which makes very modest assumptions about the participating distributions (criterion (a)). Further, in line with criterion (c), the test returns a variable ϵ ∈[0, 1], that quantifies the degree to which one algorithm is stochastically larger than the other, with ϵ = 0 reflecting stochastic order. We further show that the test is designed to be very powerful (criterion (b)), which is possible because the decision on the superior algorithm is complemented by the confidence score. Extensive experimental analysis (§ 5): We revisit the extensive experimental setup of Reimers and Gurevych (2017a,b), who performed 510 comparisons between strong DNN-based sequence tagging models. In each of their experiments they compared two models – either different models or two variants of the same model differing in some of their hyper-parameters – and reported the score distributions of each model across various random seeds and hyper-parameter configurations. Our analysis reveals that while our test can declare one of the algorithms superior in 100% of the cases, the COS approach can do that in 49.01% of the cases, and the SO approach in a mere 0.98%. In addition to being powerful, the decisions and the confidence scores of our proposed test are also well aligned with the tests proposed in previous literature: when the previous methods are challenged, our method still makes a decision but it also indicates the smaller gap between the algorithms. We hope that this work will establish a standard for the comparison between DNNs. 2 Performance Variance in DNNs In this section we discuss the source of nondeterminism in DNNs, focusing on hyperparameter configurations and random choices. Hyper-parameter Configurations DNNs are complex models governed by a variety of hyper2We use the terms Almost Stochastic Order and Almost Stochastic Dominance interchangeably in this paper. 2775 parameters. A formal definition of a hyperparameter, differentiating it from a standard parameter, is usually a parameter whose value is set before the learning process begins. We can roughly say that hyper-parameters determine the structure of the model and particular algorithmic decisions related, e.g., to its optimization. Some popular structure-related hyper-parameters in the DNN literature are the number of layers, layer sizes, activation functions, loss functions, window sizes, stride values, and parameter initialization methods. Some optimization (training) related hyper-parameters are the optimization algorithms, learning rates, number of epochs, momentum, mini-batch sizes, whether or not to use optimization heuristics such as gradient clipping and gradient normalization, and sampling and ordering methods of the training data. To decide on the hyper-parameter values, it is standard to explore several configurations and observe which performs best on an unseen, held-out dataset, commonly referred to as the development set. For some hyper-parameters (e.g. the learning rate and the optimization algorithm) the range of feasible values reflects the intuitions of the model author, and the tuned value provides some insight about the model and the data. However, for many other hyper-parameters (e.g. 
the number of neurons in each layer of the model and the number of epochs of the optimization algorithm) the range of values and the selected values are quite arbitrary. Hence, although hyper-parameters can be tuned on development data, the distribution of model’s evaluation scores across these configurations is of interest, especially when considering the hyperparameters with the more arbitrary values. Random Choices There are also hyperparameters that do not follow the above tuning logic. These include some of the hyper-parameters that govern the random ordering of the training examples, the dropout process and the initialization of the model parameters. The values of these hyper-parameters are often set randomly. In other cases, randomization is introduced to the model without an explicit hyper-parameter. For example, a popular initialization method for DNN weights is the Xavier method (Glorot and Bengio, 2010). In this method, the initial weights are sampled from a Gaussian distribution with a mean of 0 and an std of p 2/ni, where ni is the number of input units of the i-th layer. As discussed in §1, being non-convex, the convergent point of DNNs is deeply affected by these random effects. Unfortunately, exploring all possible random seeds is impossible both because they form an uncountable set and because their values are uninterpretable and it is hence even hard to decide on the relevant search space for their values. This dictates the need for reporting model results with multiple random choices. 3 Comparing DNNs: Problem Formulation and Background Problem Definition Given two algorithms, each associated with a set of test-set evaluation scores, our goal is to determine which algorithm, if any, is superior. In this research, the score distributions are generated when running two different DNNs with various hyper-parameter configurations and random seeds. For both DNNs, the performance is measured using the same evaluation measure over the same dataset,3 but, to be as general as possible, the number of scores may vary between the DNNs. As noted in §1, several methods were proposed for the comparison between the score distributions of two DNNs. We now discuss these methods. 3.1 Collection of Statistics (COS) This approach is based on the analysis of statistics of the empirical score distributions. For example, Reimers and Gurevych (2018) averaged the testset scores and applied the Welch’s t-test (Welch, 1947) for comparing between the means. Notice that the Welch’s t-test is based on the assumption that the test-set scores are drawn from normal distributions – an assumption that has not been validated for DNN score distributions. Hence, this method does not meet criterion (a) from §1, that requires the comparison method to check for statistical significance under realistic assumptions. Moreover, comparing only the mean of two distributions is not always sufficient for making predictions about future comparisons between the algorithms. Other statistics such as the std, median and the minimum and maximum values are often also relevant. For example, it might be that the expected value of algorithm A is indeed larger than that of algorithm B, but A’s std is also much larger, making prediction very challenging. In §5 we show that if both larger mean and smaller std 3Without loss of generality we will assume that higher values of the measure are better. 2776 is required for a decision, the COS approach is decisive (i.e. 
it can declare that one algorithm is better than the other) in only 49.01% of the 510 setups considered in Reimers and Gurevych (2017b). This violates our criterion (b) which requires the comparison test to be powerful. 3.2 Stochastic Order (SO) Another approach, proposed by Reimers and Gurevych (2018), tests whether a score drawn from the distribution of algorithm A (denoted as XA) is likely, with a probability higher than 0.5, to be larger than a score drawn from the distribution of algorithm B (XB). Put it formally, algorithm A is declared superior to algorithm B if: P(XA ≥XB) > 0.5. (1) To test if this requirement holds based on the empirical score distributions of the two algorithms, the authors applied the Mann-Whitney U test for independent pairs (Mann and Whitney, 1947) – which tests whether there exists a stochastic order (SO) between two random variables. This test is non-parametric, making no assumptions about the participating distributions except for being continuous. In the appendix we show that if there is an SO between two distributions, the condition in Equation 1 also holds. We next describe the concept of SO in more details. But first, in order to keep our paper selfcontained, we define the cumulative distribution function (CDF) and the empirical CDF of a probability distribution. The CDF For a random variable X, the CDF is defined as follows: F(t) = P(X ≤t). For a sample {x1, .., xn}, the empirical CDF is defined as follows: Fn(t) = 1 n n X i=1 1xi≤t = number of xis ≤t n , where 1xi≤t is an indicator function that takes the value of 1 if xi ≤t, and 0 otherwise. These definitions are required for the definition of SO we make next. Stochastic Order (SO) Lehmann (1955) defines a random variable X to be stochastically larger than a random variable Y (denoted by X ⪰ Y ) if F(a) ≤G(a) for all a (with a strict inequality for some values of a), where F and G are the CDFs of X and Y , respectively. That is, if we observe a random value sampled from the first distribution, it is likely to be larger than a random value sampled from the second distribution. If it can be concluded from the empirical score distributions of two DNNs that SO exists between their respective population distributions, this means that one algorithm is more likely to produce higher quality solutions than the other, and this algorithm can be declared superior. As discussed above, Reimers and Gurevych (2018) applied the Mann-Whitney U-test to test for this relationship. The U-test has high statistical power when the tested distributions are moderate-tailed, e.g., the normal distribution or the logistic distribution. When the distribution is heavy tailed, e.g., the Cauchy distribution, there are several alternative statistical tests that have higher statistical power, for example likelihood based tests (Lee and Wolfe, 1976; El Barmi and McKeague, 2013). The main limitation of this approach is that SO can rarely be proved to hold based on two empirical distributions. Indeed, in §5 we show that an SO holds between the two compared algorithms only in 0.98% of the comparisons performed by Reimers and Gurevych (2017a). Hence, while this approach meets our criterion (a) (testing for significance under realistic assumptions), it does not meet criterion (b) (being a powerful test) and criterion (c) (providing a confidence score). We will next describe another approach that does meet our three criteria. 
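To make the SO check above concrete, the following sketch computes an empirical CDF and applies the one-sided Mann-Whitney U test. It assumes SciPy's `mannwhitneyu` and is only an illustration of the procedure described in this section, not the original experimental code.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def empirical_cdf(sample):
    """Return the empirical CDF F_n of a sample as a callable F_n(t)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    return lambda t: np.searchsorted(xs, t, side="right") / n   # #{x_i <= t} / n

def is_stochastically_larger(scores_a, scores_b, alpha=0.05):
    """One-sided Mann-Whitney U test of the null 'A is not stochastically
    larger than B'; a small p-value supports declaring A superior under SO."""
    _, p_value = mannwhitneyu(scores_a, scores_b, alternative="greater")
    return p_value, p_value <= alpha
```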
4 Our Approach: Almost Stochastic Dominance Our starting point is that the requirement of SO is unrealistic because it means that the inequality F(a) ≤G(a) should hold for every value of a. It is likely that this criterion should fail to determine dominance between two distributions even when a ”reasonable” decision-maker would clearly prefer one DNN over the other. We hence propose to employ a relaxed version of this criterion. We next discuss different definitions of such relaxation. A Potential Relaxation For ϵ > 0 and random variables X and Y with CDFs F and G, respectively, we can define the following notion of ϵstochastic dominance: 2777 X ⪰ϵ Y if F(a) ≤G(a) + ϵ for all a. That is, we allow the distributions to violate the stochastic order, and hence one CDF does not have to be strictly below the other for all a. The practical shortcomings of this definition are apparent in cases where F(a) is greater than G(a) for all a, with a gap bounded by, for example, ϵ/2. In such cases we would not want to determine that X ∼F is ϵ stochastically dominant over Y ∼ G because its CDF is strictly above the CDF of Y , and hence Y is stochastically larger than X. However, according to this relaxation, X ∼F is indeed ϵ stochastically larger than Y ∼G. Almost Stochastic Dominance To overcome the limitations of the above straight forward approach, and define a relaxation of stochastic order, we turn to a definition that is based on the proportion of points in the domain of the participating distributions for which SO holds. That is, the test we will introduce below is based on the following two violation sets: VX = {a : F(a) > G(a)}. VY = {a : F(a) < G(a)}. Intuitively, the variable with the smaller violation set should be declared superior and the ratio between these sets should define the gap between the distributions. To implement this idea, del Barrio et al. (2018) defined the concept of almost stochastic dominance. Here we describe their work, that aims to compare two distributions, and discuss its applicability to our problem of comparing two DNN models based on the three criteria defined in §1. We start with a definition: for a CDF F, the quantile function associated with F is defined as: F −1(t) = inf{x : t ≤F(x)}, t ∈(0, 1). (2) It is possible to define stochastic order using the quantile function in the following manner: X ⪰Y ⇐⇒F −1(t) ≥G−1(t), ∀t ∈(0, 1). (3) The advantage of this definition is that the domain of the quantile function is bounded between 0 and 1. This is in contrast to the CDF whose domain is unbounded. From this definition, it is clear that a violation of the stochastic order between X and Y occurs when F −1(t) < G−1(t). Hence, it is easy to redefine VX and VY based on the quantile functions: AX = {t ∈(0, 1) : F −1(t) < G−1(t)}. AY = {t ∈(0, 1) : F −1(t) > G−1(t)}. del Barrio et al. (2018) employed these definitions in order to define the distance of each random variable from stochastic dominance over the other: εW2(F, G) := R AX(F −1(t) −G−1(t))2dt W2(F, G) 2 . (4) Where W2(F, G), also known as the univariate L2-Wasserstein distance between distributions, is defined as: W2(F, G) = sZ 1 0 (F −1(t) −G−1(t))2dt. (5) This ratio explicitly measures the distance of X (with CDF F) from stochastic dominance over Y (with CDF G) since it reflects the probability mass for which Y dominates X. 
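A minimal numerical sketch of this index is given below; it approximates the integrals in Eq. 4 and 5 on a uniform grid of quantiles of the two empirical score distributions. The grid size and the handling of identical samples are our own choices, not part of the original definition.

```python
import numpy as np

def epsilon_w2(scores_x, scores_y, grid_size=1000):
    """Estimate eps_W2(F, G): the distance of X (scores_x) from being
    stochastically dominant over Y (scores_y), a value in [0, 1]."""
    t = (np.arange(grid_size) + 0.5) / grid_size            # grid over (0, 1)
    q_f = np.quantile(scores_x, t)                          # F^{-1}(t)
    q_g = np.quantile(scores_y, t)                          # G^{-1}(t)
    diff_sq = (q_f - q_g) ** 2
    w2_sq = diff_sq.mean()                                  # approximates W2(F, G)^2
    violation = np.where(q_f < q_g, diff_sq, 0.0).mean()    # integral over A_X
    return violation / w2_sq if w2_sq > 0 else 0.5          # identical samples: no dominance
```

By construction, the estimate satisfies epsilon_w2(x, y) = 1 - epsilon_w2(y, x), consistent with the symmetry property discussed next.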
The corresponding definition for the distance of Y from being stochastically dominant over X can be received from the above equations by replacing the roles of F and G and integrating over AY instead of AX. This index satisfies 0 ≤εW2(F, G) ≤1 where 0 corresponds to perfect stochastic dominance of X over Y and 1 corresponds to perfect stochastic dominance of Y over X. It also holds that εW2(F, G) = 1 −εW2(G, F), and smaller values of the smaller index (which is by definition bounded between 0 and 0.5) indicate a smaller distance from stochastic dominance. Statistical Significance Testing for ASO Using this index it is possible to formulate the following hypothesis testing problem to test for almost stochastic dominance: H0 : εW2(F, G) ≥ϵ H1 : εW2(F, G) < ϵ which tests, for a predefined ϵ > 0, if the violation index is smaller than ϵ. Rejecting the null hypothesis means that the first score distribution F is almost stochastically larger than G with ϵ distance from stochastic order. del Barrio et al. (2018) proved that without further assumptions, H0 will be rejected with a significance level of α if: r nm n + m εW2(Fn, Gm) −ϵ  < ˆσn,mΦ−1(α), 2778 where Fn, Gm are the empirical CDFs with n and m samples, respectively, ϵ is the violation level, Φ−1 is the inverse CDF of a normal distribution and ˆσn,m is the estimated variance of the value r nm n + m εW2(F ∗ n, G∗ m) −εW2(Fn, Gm)  , where εW2(F ∗ n, G∗ m) is computed using samples X∗ n, Y ∗ m from the empirical distributions Fn and Gm.4 In addition, the minimal ϵ for which we can claim, with a confidence level of 1 −α, that F is almost stochastically dominant over G is ϵmin(Fn, Gm, α) = εW2(Fn, Gm) − q n+m nm ˆσn,mΦ−1(α). If ϵmin(Fn, Gm, α) < 0.5, we can claim that algorithm A is better than B, and the lower ϵmin(Fn, Gm, α) is the greater is the gap between the algorithms. When ϵmin(Fn, Gm, α) = 0, algorithm A is stochastically dominant over B. However, if ϵmin(Fn, Gm, α) ≥0.5, then F is not almost stochastically larger than G (with confidence level 1−α) and hence we should accept the null hypothesis that algorithm A is not superior to algorithm B. del Barrio et al. (2018) proved that, assuming accurate estimation of ˆσn,m, it holds that: ϵmin(Fn, Gm, α) = 1 −ϵmin(Gm, Fn, α). Hence, for a given α value, one of the algorithms will be declared superior, unless ϵmin(Fn, Gm, α) = ϵmin(Gm, Fn, α) = 0.5. Notice that the minimal ϵ and the rejection condition of the null hypothesis depend on n and m, the number of scores we have for each algorithm. Hence, for the statistical test to have high statistical power we need to make sure that n and m are big enough. While we cannot provide a method for tuning these numbers, we note that in the extensive analysis of §5 the test had enough statistical power to make decisions in all cases. The pseudo code of our implementation is provided in the appendix. To summarize, the test for almost stochastic dominance meets the three criteria defined in §1. This is a test for statistical significance under very minimal assumptions on the distribution from 4The more samples, the better. In our implementation we employ the inverse transform sampling method to generate samples. which the performance scores are drawn (criterion (a)). Moreover, it quantifies the gap between the two reference distributions (criterion (c)), which allows it to make decisions even in comparisons where the gap between the superior algorithm and the inferior algorithm is not large (criterion (b)). 
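To illustrate how the test can be run in practice, the sketch below combines the point estimate from the previous sketch with a resampling estimate of the standard deviation to compute the minimal epsilon. It reuses the hypothetical `epsilon_w2` function defined above; the number of resamples and the use of bootstrap resampling from the empirical CDFs (equivalent here to the inverse transform sampling mentioned in the footnote) are our assumptions, so the authors' released code should be preferred for exact replication.

```python
import numpy as np
from scipy.stats import norm

def aso_min_epsilon(scores_x, scores_y, alpha=0.01, num_samples=1000, seed=0):
    """Estimate eps_min(F_n, G_m, alpha): the smallest eps for which, with
    confidence 1 - alpha, X can be claimed almost stochastically larger than Y.
    Values below 0.5 favour X; 0.0 corresponds to full stochastic dominance."""
    rng = np.random.default_rng(seed)
    x = np.asarray(scores_x, dtype=float)
    y = np.asarray(scores_y, dtype=float)
    n, m = len(x), len(y)
    eps_hat = epsilon_w2(x, y)                        # point estimate (sketch above)

    scale = np.sqrt(n * m / (n + m))
    stats = np.empty(num_samples)
    for b in range(num_samples):
        x_star = rng.choice(x, size=n, replace=True)  # samples from F_n
        y_star = rng.choice(y, size=m, replace=True)  # samples from G_m
        stats[b] = scale * (epsilon_w2(x_star, y_star) - eps_hat)
    sigma_hat = stats.std(ddof=1)

    # eps_min = eps_hat - sqrt((n+m)/(nm)) * sigma_hat * Phi^{-1}(alpha);
    # norm.ppf(alpha) is negative for alpha < 0.5, so the correction is positive.
    return eps_hat - sigma_hat * norm.ppf(alpha) / scale
```

Comparing two score samples then reduces to a single call, e.g. `aso_min_epsilon(scores_a, scores_b, alpha=0.01)`, declaring the first algorithm superior when the returned value is below 0.5 (or below a stricter, pre-specified threshold).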
To demonstrate the appropriateness of this method for the comparison between two DNNs we next revisit the extensive experimental setup of Reimers and Gurevych (2017a). 5 Analysis Tasks and Models In this section we demonstrate the potential impact of testing for almost stochastic dominance on the way empirical results of NLP models are analyzed. We use the data of Reimers and Gurevych (2017a)5 and Reimers and Gurevych (2017b).6 This data contains 510 comparison setups for five common NLP sequence tagging tasks: Part Of Speech (POS) tagging with the WSJ corpus (Marcus et al., 1993), syntactic chucking with the CoNLL 2000 data (Sang and Buchholz, 2000), Named Entity Recognition with the CoNLL 2003 data (Sang and De Meulder, 2003), Entity Recognition with the ACE2005 data (Walker et al., 2006), and event detection with the TempEval3 data (UzZaman et al., 2013). In each setup two leading DNNs, either different architectures or variants of the same model but with different hyper-parameter configurations, are compared across various choices of random seeds and hyperparameter configurations. The exact details of the comparisons are beyond the scope of this paper; they are documented in the above papers. For each experimental setup, we report the outcome of three alternative comparison methods: collection of statistics (COS), stochastic order (SO), and almost stochastic order (ASO). For COS, we report the mean, std, and median of the scores for each algorithm, as well as their minimum and maximum values. We consider one algorithm to be superior over another only if both its mean is greater and its std is smaller. For SO, we employ the U-test as proposed by Reimers and Gurevych (2018), and consider a result significant if p ≤0.05 . Finally, for ASO we employ the method of §4 and report the identity of the superior algorithm along with its ϵ value, using p ≤0.01. 5https://github.com/UKPLab/ emnlp2017-bilstm-cnn-crf 6Which was generously given to us by the authors. 2779 Analysis Structure We divide our analysis into three cases. In Case A both the COS and the SO approaches indicate that one of the models is superior. In Case B, the previous methods reach contradicting conclusions: while COS indicates that one of the algorithms is superior, SO comes insignificant. Finally, in Case C both COS and SO are indecisive. In the 510 comparisons we analyze there is no setup where SO was significant but COS could not reach a decision. We start with an example setup for each case and then provide a summary of all 510 comparisons. Results: Case A We demonstrate that if algorithm A is stochastically larger than algorithm B then all three methods agree that algorithm A is better than B. As an example setup we analyze the comparison between the NER models of Lample et al. (2016) and Ma and Hovy (2016) when running both algorithms multiple times, changing only the random seed fed into the random number generator (41 scores from (Lample et al., 2016), 87 scores from (Ma and Hovy, 2016)). The evaluation measure is F1 score. The collection of statistics for the two models is presented in Table 1. Lample et al. Ma&Hovy Mean 0.9075 0.9056 STD 0.2237 0.3211 Median 0.9080 0.9063 Min 0.9018 0.8853 Max 0.9113 0.9100 Table 1: NER results. (Case A). The U-test states that (Lample et al., 2016) is stochastically larger than (Ma and Hovy, 2016) with a p-value of 0.00025. 
This result is also consistent with the prediction of the COS approach as (Lample et al., 2016) is better than (Ma and Hovy, 2016) both in terms of mean (larger) and std (smaller). Finally, the minimum ϵ value of the ASO method is 0, which also reflects an SO. Results: Case B We demonstrate that if the measures of mean and std from the COS approach indicate that algorithm A is better than algorithm B but stochastic dominance does not hold, then it also holds that A is almost stochastically larger than B with a small ϵ > 0. As an example case we consider the experiment where the performance of a BiLSTM POS tagger with one of two optimizers, Adam (Kingma and Ba, 2014) (3898 scores) or RMSProp (Hinton et al., 2012) (1822 scores), are compared across various hyper-parameter configurations and random seeds. The evaluation measure is word level accuracy. The COS for the two models is presented in Table 2. Adam RMSprop Average 0.9224 0.9190 STD 0.0604 0.0920 Median 0.9319 0.9349 Min 0.1746 0.1420 Max 0.9556 0.9573 Table 2: POS tagging results (Case B). The result of the U-test came insignificant with p-value of 0.4562. The COS approach predicts that Adam is the better optimizer as both its mean is larger and its std is smaller. When comparing between Adam and RMSProrp, the ASO method returns an ϵ of 0.0159, indicating that the former is almost stochastically larger than the latter. We note that decisions with the COS method are challenging as it potentially involves a large number of statistics (five in this analysis). Our decision here is to make the COS prediction based on the mean and std of the score distribution, even when according to other statistics the conclusion might have been different. We consider this ambiguity an inherent limitation of the COS method. Results: Case C Finally, we address the case where stochastic dominance does not hold and no conclusions can be drawn from the statistics collection. Our observation is that even in these cases ASO is able to determine which algorithm is better with a reasonable level of confidence. We consider again a BiLSTM architecture, this time for NER, where the comparison is between two dropout policies – no dropout (225 scores) and variational dropout (2599 scores). The evaluation measure is the F1 score and the collection of statistics is presented in Table 3. Variational No Dropout Mean 0.8850 0.8772 STD 0.0392 0.0247 Median 0.8896 0.8799 Min 0.0119 0.5547 Max 0.9098 0.8995 Table 3: NER Results (Case C). 2780 (a) Case A (b) Case B (c) Case C Figure 1: An histogram of ϵ values of the ASO method for cases A, B and C. The U-test came insignificant with a p-value of 0.5. COS is also inconclusive as the mean result of the variational dropout approach is larger, but so also its std. In this case, looking at the other statistics also gives a mixed picture as the median and max values of the variational approach are larger, but its min value is substantially smaller. The ASO approach indicates that the no dropout approach is almost stochastically larger, with ϵ = 0.0279. An in-depth consideration supports this decision as the much larger std and the much smaller minimum of the variational approach are indicators of a skewed score distribution that leaves low certainty about future performance. Results: Summary We now turn to a summary of our analysis across the 510 comparisons of Reimers and Gurevych (2017a). 
Table 4 presents the percentage of comparisons that fall into each category, along with the average and std of the ϵ value of ASO for each case (all ASO results are significant with p ≤0.01). Figure 1 presents the histogram of these ϵ values in each case. % of comparisons Avg. ϵ ϵ std Case A 0.98% 0.0 0.0 Case B 48.04% 0.072 0.108 Case C 50.98% 0.202 0.143 Table 4: Results summary over the 510 comparisons of Reimers and Gurevych (2017a). The number of comparisons that fall into case A is only 0.98%, indicating that it is rare that a decision about stochastic dominance of one algorithm can be reached when comparing DNNs. We consider this a strong indication that the Mann Whitney U test is not suitable for DNN comparison as it has very little statistical power (criterion (b)). COS makes a decision in 49.01% of the comparisons (case A and B). This method is also somewhat powerful (criterion (b)), but much less so than ASO that is decisive in all 510 comparisons. The ϵ values of ASO are higher for case B than for case A (middle line of the table, middle graph of the figure). For case C the ϵ distribution is qualitatively different – ϵ receives a range of values (rightmost graph of the figure) and its average is 0.202 (bottom line of the table). We consider this to be a desired behavior as the more complex the picture drawn by COS and SO is, the less confident we expect ASO to be. Being able to make a decision in all 510 comparisons while quantifying the gap between the distributions, we believe that ASO is an appropriate tool for DNN comparison. 6 Error Rate Analysis While our extensive analysis indicates the quality of the ASO test, it does not allow us to estimate its false positive and false negative rates. This is because in our 510 comparisons there is no oracle (or gold standard) that says if one of the algorithms is superior. Below we provide such analyses. False Positive Rate The ASO test is defined such that the ε value required for rejecting the conclusion that algorithm A is better than B is defined by the practitioner. While ε = 0.5 indicates a clear rejection, most researchers would probably set a lower ε threshold. Our goal in the next analysis is to present a case where the false positive rate of ASO is very low, even when one refrains from declaring one algorithm as better than the other only when ε is very close to 0.5. To do that, we consider a scenario where each of the 255 score distributions of the experiments in § 5 is compared to a variant of the same distribution after a Gaussian noise with a 0 expectation and a standard deviation of 0.001 is added to 2781 (a) False Positive Rate Experiment (b) False Negative Rate Experiment Figure 2: Histograms of the ϵ values of the ASO test in the ablation experiments. each of the scores. Since in all the tasks we consider the scores are in the [0, 1] range, the value of 0.001 is equivalent to 0.1%. Since the average of the standard deviations of these 255 score distributions is 0.06, our noise is small but not negligible. We choose this relatively small symmetric noise so that with a high probability the original score distribution and the modified one should not be considered different. We run 100 comparisons for each of the 255 algorithms. We compute the ε such that a value of 0 means that the non-noisy version is better than the noisy one with the strongest confidence, while the value of 1 means the exact opposite (both values are not observed in practice). 
A value of 0.5 indicates that no algorithm is superior – the correct prediction. Figure 2 (a) presents a histogram of the ε values. The averaged ε is 0.502 with a standard deviation of 0.0472, and 95% of the ε values are in [0.396, 0.631]. This means that if we set a threshold of 0.4 on ε (i.e. lower than 0.4 or higher than 0.6), the false positive rate would be lower than 5%. In comparison, the COS approach declares the noisy version superior in 26.2% of the 255 comparisons, and the non-noisy version in 23.8%: a false positive rate of 50%.7 The SO test makes no mistakes, as a false positive of this test is equivalent to an ε value of 0 or 1 for ASO. Finally, we also considered a setup where for each of the 255 algorithms the performance score set was randomly split into two equal sized sets. We repeated this process 100 times for each algorithm, using ASO to compare between the sets. In all cases we observed an averaged ε of 0.5, indicating that the method avoids false positive predictions when an algorithm is compared to itself. 7Recall that we consider one algorithm superior over the other according to COS when both the mean of its scores is larger than the mean of the other, and its std is smaller. False Negative Rate This analysis complements the previous one by demonstrating the low false negative rate of ASO in a case where it is clear that one distribution is better than the other. For each of the 255 score distributions we generate a noisy distribution by randomly splitting the scores into a set A of 1 4 of the scores and the complementary set ˆA of the rest of the scores. For each score s we sample a noise parameter φ from a Gaussian with a 0 expectation and an std of 0.01, adding to s the value of (−1) · φ2 if s ∈A, and φ2 if s ∈ˆA. The noisy distribution is superior to the original one, with a high probability. As before we perform 100 comparisons for each of the 255 algorithms. We compute ε such that a value of 0 would mean that the noisy version is superior. The ε values are plotted in Figure 2 (b): their average is 0.134, standard deviation is 0.07 and more than 99% of the values are lower than 0.4 (the same threshold as in the first experiment). The COS test deems the noisy distribution superior in 87.4% of the cases, while in the rest it considers none of the distributions superior. SO has a false negative rate of 100% as ε > 0 in all experiments. 7 Conclusions We considered the comparison of two DNNs based on their test-set score distribution. We defined three criteria for a high quality comparison method, demonstrated that previous methods do not meet these criteria and proposed to use the recently proposed test for almost stochastic dominance that does meet these criteria. We analyzed the extensive experimental setup of Reimers and Gurevych (2017a) and demonstrated the effectiveness of our proposed test. Having released our code, we hope this will become a new evaluation standard in the NLP community. 2782 References PC ´Alvarez-Esteban, Eustasio del Barrio, Juan Antonio Cuesta-Albertos, C Matr´an, et al. 2017. Models for the assessment of treatment improvement: the ideal and the feasible. Statistical Science, 32(3):469–485. Eustasio del Barrio, Juan A Cuesta-Albertos, and Carlos Matr´an. 2018. An optimal transportation approach for assessing almost stochastic order. In The Mathematics of the Uncertain, pages 33–44. Springer. Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. 2014. 
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, pages 2933–2941. Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proc. of ICLR. Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with multiple datasets. Transactions of the Association for Computational Linguistics, 5:471–486. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhikers guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1383–1392. Hammou El Barmi and Ian W McKeague. 2013. Empirical likelihood-based tests for stochastic ordering. Bernoulli: official journal of the Bernoulli Society for Mathematical Statistics and Probability, 19(1):295. Jenny Rose Finkel, Alex Kleeman, and Christopher D Manning. 2008. Efficient, feature-based, conditional random field parsing. Proceedings of ACL08: HLT, pages 959–967. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1127–1138. Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. 2012. Neural networks for machine learning lecture 6a overview of mini-batch gradient descent. Cited on, page 14. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Young Jack Lee and Douglas A Wolfe. 1976. A distribution-free test for stochastic ordering. Journal of the American Statistical Association, 71(355):722–727. Erich Leo Lehmann. 1955. Ordered families of distributions. The Annals of Mathematical Statistics, pages 399–419. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354. Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The annals of mathematical statistics, pages 50–60. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Nils Reimers and Iryna Gurevych. 2017a. Optimal hyperparameters for deep lstm-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799. Nils Reimers and Iryna Gurevych. 2017b. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348. Nils Reimers and Iryna Gurevych. 2018. Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. 
arXiv preprint arXiv:1803.09578. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the conference on empirical methods in natural language processing, pages 1524–1534. Association for Computational Linguistics. Erik F Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task: Chunking. arXiv preprint cs/0009008. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. arXiv preprint cs/0306050. 2783 Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 conference of the North American chapter of the association for computational linguistics on human language technologyvolume 1, pages 173–180. Association for Computational Linguistics. Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen, Marc Verhagen, and James Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 1–9. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57. Bernard L Welch. 1947. The generalization ofstudent’s’ problem when several different population variances are involved. Biometrika, 34(1/2):28–35. Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2145–2158. 2784 A Proof - Equivalent Definitions of Stochastic Order As discussed in §3, our goal here is to prove that if a random variable X is stochastically larger than a random variable Y (denoted by X ⪰Y ), then it also holds that P(X ≥Y ) > 0.5. This lemma explains why Reimers and Gurevych (2018) employed the Mann-Whitney U test that tests for stochastic order, while their requirement for stating that one algorithm is better than the other was that P(X ≥Y ) > 0.5 (where X is the score distribution of the superior algorithm and Y is the score distribution of the inferior algorithm). Lemma 1. If X ⪰Y then P(X ≥Y ) > 0.5. Proof. For every two continuous random variables X, Y it holds that: P(X ≥Y ) + P(Y > X) = 1. Let us first assume that X and Y are i.i.d and continuous. If this is the case then: P(X ≥Y ) + P(Y > X) = 1 P(X ≥Y ) + P(X > Y ) = 1 2P(X ≥Y ) = 1 P(X ≥Y ) = 0.5. The first pass is true because X and Y are identically distributed and the second pass is true because X and Y are continuous random variables. 
Assuming that the density functions of the random variables X and Y exist (which is true because they are continuous variables), we can write P(X ≥ Y) in the following manner:

$$P(X \geq Y) = \int_{-\infty}^{\infty} \int_{y}^{\infty} f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{\infty} f_Y(y)\, P(X \geq y)\, dy = \int_{-\infty}^{\infty} f_Y(y)\, P(Y \geq y)\, dy = 0.5,$$

where the equality to 0.5 was proved above. In our case, X ⪰ Y. This means that X and Y are independent but are not identically distributed. By the definition of stochastic order, this also means that P(X ≥ a) ≥ P(Y ≥ a) for all a, with strict inequality for at least one value of a. We get that:

$$P(X \geq Y) = \int_{-\infty}^{\infty} \int_{y}^{\infty} f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{\infty} f_Y(y)\, P(X \geq y)\, dy > \int_{-\infty}^{\infty} f_Y(y)\, P(Y \geq y)\, dy = 0.5,$$

where the last pass holds because X is stochastically larger than Y. We get that P(X ≥ Y) > 0.5.

Note that the opposite direction does not always hold, i.e., it is easy to come up with an example where P(X ≥ Y) > 0.5 but there is no stochastic order between the two random variables. However, the opposite direction does hold under the additional assumption that the CDFs do not cross one another (which we do not prove here).

B Hypothesis Testing for Almost Stochastic Dominance

In this section we discuss the implementation of the algorithm for hypothesis testing of the almost stochastic dominance relation between two random variables (empirical score distributions). The code of the algorithm is publicly available. We are given two sets of scores from two algorithms, n scores from algorithm A and m scores from algorithm B: A = {x_1, x_2, ..., x_n}, B = {y_1, y_2, ..., y_m}. The pseudocode of the algorithm is as follows:

1. Sort the data points from the smallest to the largest in both sets, creating two lists: A = [x_{(1)}, ..., x_{(n)}] and B = [y_{(1)}, ..., y_{(m)}], where x_{(i)} is the i-th smallest value.

2. Build the empirical score distributions F_n, G_m using the following formula:
$$F_n(t) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{x_{(i)} \leq t} = \frac{\text{number of } x_i\text{s} \leq t}{n}$$

3. Build the empirical inverse score distributions F^{-1}(t), G^{-1}(t) using the following formula (it is possible to compute the inverse CDF without explicitly computing the CDF):
$$F^{-1}(t) = \inf\{x : t \leq F(x)\}, \quad t \in (0, 1)$$

4. Compute the index of stochastic dominance violation $\varepsilon_{W_2}(F, G)$ (equation 4 of the main paper). In practice we compute the integral using the definition of the Riemann integral: when computing $\int_0^1 f(t)\, dt$, we partition the interval between 0 and 1 into small parts of size ∆ and sum the value of the function in each part times ∆.

5. Estimate σ: take many samples X*_n, Y*_m from the empirical distributions F_n and G_m; for each of those samples compute the expression
$$\sqrt{\frac{nm}{n+m}} \left( \varepsilon_{W_2}(F^*_n, G^*_m) - \varepsilon_{W_2}(F_n, G_m) \right)$$
and use the variance of those values as the estimate for σ², taking the square root of that estimator as $\hat{\sigma}_{n,m}$. The more samples, the better. In our implementation we employ the inverse transform sampling method to generate samples.

6. The minimal ϵ for which we can claim that algorithm A is almost stochastically larger than algorithm B with confidence level 1 − α is:
$$\epsilon_{\min}(F_n, G_m, \alpha) = \varepsilon_{W_2}(F_n, G_m) - \sqrt{\frac{n+m}{nm}}\, \hat{\sigma}_{n,m}\, \Phi^{-1}(\alpha)$$
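A compact NumPy/SciPy sketch of these six steps follows. It is illustrative only and is not the released implementation: the orientation of the violation ratio (equation 4 of the main paper, not reproduced here) and the convention of returning 0.5 for identical distributions are assumptions made for the sketch, and resampling with replacement is used in place of explicit inverse transform sampling (the two are equivalent for an empirical CDF).

```python
import numpy as np
from scipy.stats import norm

def eps_w2(a, b, grid=1000):
    """Violation ratio eps_W2(F, G): the share of the squared W2 distance between the
    empirical quantile functions that violates 'A stochastically larger than B'."""
    t = (np.arange(grid) + 0.5) / grid          # Riemann midpoints of (0, 1), Delta = 1/grid
    f_inv = np.quantile(a, t)                   # empirical inverse CDF of A's scores (step 3)
    g_inv = np.quantile(b, t)
    diff = g_inv - f_inv                        # > 0 where B's quantile exceeds A's (a violation)
    total = np.mean(diff ** 2)                  # squared W2 distance (Riemann sum, step 4)
    if total == 0.0:                            # identical distributions: convention for the sketch
        return 0.5
    return np.mean(np.where(diff > 0.0, diff ** 2, 0.0)) / total

def aso_eps_min(a, b, alpha=0.05, n_boot=1000, seed=0):
    """Steps 1-6: minimal eps for claiming A is almost stochastically larger than B."""
    rng = np.random.default_rng(seed)
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))   # step 1
    n, m = len(a), len(b)
    eps_hat = eps_w2(a, b)                                                # steps 2-4
    scale = np.sqrt(n * m / (n + m))
    boot = [scale * (eps_w2(rng.choice(a, n), rng.choice(b, m)) - eps_hat)
            for _ in range(n_boot)]                                       # step 5 (bootstrap sigma)
    sigma_hat = np.std(boot)
    return eps_hat - sigma_hat * norm.ppf(alpha) / scale                  # step 6
```

Calling aso_eps_min(scores_a, scores_b, alpha=0.01) on two score samples would then return the kind of ϵ value reported in §5.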
2019
266
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2786 We need to talk about standard splits Kyle Gorman City University of New York [email protected] Steven Bedrick Oregon Health & Science University [email protected] Abstract It is standard practice in speech & language technology to rank systems according to performance on a test set held out for evaluation. However, few researchers apply statistical tests to determine whether differences in performance are likely to arise by chance, and few examine the stability of system ranking across multiple training-testing splits. We conduct replication and reproduction experiments with nine part-of-speech taggers published between 2000 and 2018, each of which reports state-of-the-art performance on a widely-used “standard split”. We fail to reliably reproduce some rankings using randomly generated splits. We suggest that randomly generated splits should be used in system comparison. 1 Introduction Evaluation with a held-out test set is one of the few methodological practices shared across nearly all areas of speech and language processing. In this study we argue that one common instantiation of this procedure—evaluation with a standard split— is insufficient for system comparison, and propose an alternative based on multiple random splits. Standard split evaluation can be formalized as follows. Let G be a set of ground truth data, partitioned into a training set Gtrain, a development set Gdev and a test (evaluation) set Gtest. Let S be a system with arbitrary parameters and hyperparameters, and let M be an evaluation metric. Without loss of generality, we assume that M is a function with domain G × S and that higher values of M indicate better performance. Furthermore, we assume a supervised training scenario in which the free parameters of S are set so as to maximize M(Gtrain, S), optionally tuning hyperparameters so as to maximize M(Gdev, S). Then, if S1 and S2 are competing systems so trained, we prefer S1 to S2 if and only if M(Gtest, S1) > M(Gtest, S2). 1.1 Hypothesis testing for system comparison One major concern with this procedure is that it treats M(Gtest, S1) and M(Gtest, S2) as exact quantities when they are better seen as estimates of random variables corresponding to true system performance. In fact many widely used evaluation metrics, including accuracy and F-score, have known statistical distributions, allowing hypothesis testing to be used for system comparison. For instance, consider the comparison of two systems S1 and S2 trained and tuned to maximize accuracy. The difference in test accuracy, ˆδ = M(Gtest, S1) −M(Gtest, S2), can be thought of as estimate of some latent variable δ representing the true difference in system performance. While the distribution of ˆδ is not obvious, the probability that there is no population-level difference in system performance (i.e., δ = 0) can be computed indirectly using McNemar’s test (Gillick and Cox, 1989). Let n1>2 be the number of samples in Gtest which S1 correctly classifies but S2 misclassifies, and n2>1 be the number of samples which S1 misclassifies but S2 correctly classifies. When δ = 0, roughly half of the disagreements should favor S1 and the other half should favor S2. Thus, under the null hypothesis, n1>2 ∼Bin(n, .5) where n = n1>2 + n2>1. 
And, the (one-sided) probability of the null hypothesis is the probability of sampling n1>2 from this distribution. Similar methods can be used for other evaluation metrics, or a reference distribution can be estimated with bootstrap resampling (Efron, 1981). Despite this, few recent studies make use of statistical system comparison. Dror et al. (2018) survey statistical practices in all long papers presented at the 2017 meeting of the Association for Computational Linguistics (ACL), and all articles published in the 2017 volume of the Transactions of the ACL. They find that the majority of these works 2787 do not use appropriate statistical tests for system comparison, and many others do not report which test(s) were used. We hypothesize that the lack of hypothesis testing for system comparison may lead to type I error, the error of rejecting a true null hypothesis. As it is rarely possible to perform the necessary hypothesis tests from published results, we evaluate this risk using a replication experiment. 1.2 Standard vs. random splits Furthermore, we hypothesize that standard split methodology may be insufficient for system evaluation. While evaluations based on standard splits are an entrenched practice in many areas of natural language processing, the static nature of standard splits may lead researchers to unconsciously “overfit” to the vagaries of the training and test sets, producing poor generalization. This tendency may also be amplified by publication bias in the sense of Scargle (2000). The field has chosen to define “state of the art” performance as “the best performance on a standard split”, and few experiments which do not report improvements on a standard split are ultimately published. This effect is likely to be particularly pronounced on highly-saturated tasks for which system performance is near ceiling, as this increases the prior probability of the null hypothesis (i.e., of no difference). We evaluate this risk using a series of reproductions. 1.3 Replication and reproduction In this study we perform a replication and a series of reproductions. These techniques were until recently quite rare in this field, despite the inherently repeatable nature of most natural language processing experiments. Researchers attempting replications or reproductions have reported problems with availability of data (Mieskes, 2017; Wieling et al., 2018) and software (Pedersen, 2008), and various details of implementation (Fokkens et al., 2013; Reimers and Gurevych, 2017; Schluter and Varab, 2018). While we cannot completely avoid these pitfalls, we select a task—English part-ofspeech tagging—for which both data and software are abundantly available. This task has two other important affordances for our purposes. First, it is face-valid, both in the sense that the equivalence classes defined by POS tags reflect genuine linguistic insights and that standard evaluation metrics such as token and sentence accuracy directly measure the underlying construct. Secondly, POS tagging is useful both in zero-shot settings (e.g., Elkahky et al., 2018; Trask et al., 2015) and as a source of features for many downstream tasks, and in both settings, tagging errors are likely to propagate. We release the underlying software under a permissive license.1 2 Materials & Methods 2.1 Data The Wall St. Journal (WSJ) portion of Penn Treebank-3 (LDC99T42; Marcus et al., 1993) is commonly used to evaluate English part-of-speech taggers. 
In experiment 1, we also use a portion of OntoNotes 5 (LDC2013T19; Weischedel et al., 2011), a substantial subset of the Penn Treebank WSJ data re-annotated for quality assurance. 2.2 Models We attempted to choose a set of taggers claiming state-of-the-art performance at time of publication. We first identified candidate taggers using the “State of the Art” page for part-of-speech tagging on the ACL Wiki.2 We then selected nine taggers for which all needed software and external data was available at time of writing. These taggers are described in more detail below. 2.3 Metrics Our primarily evaluation metric is token accuracy, the percentage of tokens which are correctly tagged with respect to the gold data. We compute 95% Wilson (1927) score confidence intervals for accuracies, and use the two-sided mid-p variant (Fagerland et al., 2013) of McNemar’s test for system comparison. We also report out-of-vocabulary (OOV) accuracy—that is, token accuracy limited to tokens not present in the training data—and sentence accuracy, the percentage of sentences for which there are no tagging errors. 3 Results Table 1 reports statistics for the standard split. The OntoNotes sample is slightly smaller as it omits sentences on financial news, most of which is highly redundant and idiosyncratic. However, the entire OntoNotes sample was tagged by a single experienced annotator, eliminating any annotatorspecific biases in the Penn Treebank (e.g., Ratnaparkhi, 1997, 137f.). 1 http://github.com/kylebgorman/SOTA-taggers 2 http://aclweb.org/aclwiki/State_of_the_art 2788 # Sentences # Tokens Penn Treebank Train. 38,219 912,344 Dev. 5,527 131,768 Test. 5,462 129,654 OntoNotes Train. 28,905 703,955 Dev. 4,051 99,441 Test 4,059 98,277 Table 1: Summary statistics for the standard split. 3.1 Models Three models—SVMTool (Giménez and Màrquez, 2004), MElt (Denis and Sagot, 2009), and Morče/COMPOST (Spoustová et al., 2009)— produced substantial compilation or runtime errors. However, we were able to perform replication with the remaining six models: • TnT (Brants, 2000): a second-order (i.e., trigram) hidden Markov model with a suffixbased heuristic for unknown words, decoded with beam search • Collins (2002) tagger: a linear model, features from Ratnaparkhi (1997), perceptron training with weight averaging, decoded with the Viterbi algorithm3 • LAPOS (Tsuruoka et al., 2011): a linear model, features from Tsuruoka et al. (2009) plus first-order lookahead, perceptron training with weight averaging, decoded locally • Stanford tagger (Manning, 2011): a loglinear bidirectional cyclic dependency network, features from Toutanova et al. (2003) plus distributional similarity features, optimized with OWL-QN, decoded with the Viterbi algorithm • NLP4J (Choi, 2016): a linear model, dynamically induced features, a hinge loss objective optimized with AdaGrad, decoded locally • Flair (Akbik et al., 2018): a bidirectional long short-term memory (LSTM) conditional random fields (CRF) model, contextual string 3We use an implementation by Yarmohammadi (2014). embedding features, a cross-entropy objective optimized with stochastic gradient descent, decoded globally 3.2 Experiment 1: Replication In experiment 1, we adopt the standard split established by Collins (2002): sections 00–18 are used for training, sections 19-21 for development, and sections 22-24 for testing, roughly a 80%-10%10% split. We train and evaluate the six remaining taggers using this standard split. 
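The two statistics described in §2.3 can be sketched as follows; this is an illustrative reimplementation assuming SciPy is available, not the code released with this paper.

```python
import numpy as np
from scipy.stats import binom

def mcnemar_midp(n12, n21):
    """Two-sided mid-p McNemar test (Fagerland et al., 2013).
    n12: tokens system 1 tags correctly and system 2 does not; n21: the reverse."""
    n, k = n12 + n21, min(n12, n21)
    # exact conditional two-sided p-value minus the point probability (mid-p correction)
    midp = 2 * binom.cdf(k, n, 0.5) - binom.pmf(k, n, 0.5)
    return min(1.0, midp)

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score confidence interval for token accuracy."""
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half
```
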
For each tagger, we train on the training set and evaluate on the test set. For taggers which support it, we also perform automated hyperparameter tuning on the development set. Results are shown in Table 2. We obtain exact replications for TnT and LAPOS, and for the remaining four taggers, our results are quite close to previously reported numbers. Token accuracy, OOV accuracy, and sentence accuracy give the same ranking, one consistent with published results. For Penn Treebank, McNemar’s test on token accuracy is significant for all pairwise comparisons at α = .05; for OntoNotes, one comparison is non-significant: LAPOS vs. Stanford (p = .1366). 3.3 Experiment 2: Reproduction We now repeat these analyses across twenty randomly generated 80%–10%–10% splits. After Dror et al. (2017), we use the Bonferroni procedure to control familywise error rate, the probability of falsely rejecting at least one true null hypothesis. This is appropriate insofar as each individual trial (i.e, evaluation on a random split) has a non-trivial statistical dependence on other trials. Table 3 reports the number of random splits, out of twenty, where the McNemar test p-value is significant after the correction for familywise error rate. This provides a coarse estimate of how often the second system would be likely to significantly outperform the first system given a random partition of similar size. Most of these pairwise comparisons are stable across random trials. However, for example, Stanford tagger is not a significant improvement over LAPOS for nearly all random trials, and in some random trials—two for Penn Treebank, fourteen for OntoNotes—it is in fact worse. Recall also that the Stanford tagger was also not significantly better than LAPOS for OntoNotes in experiment 1. Figure 1 shows token accuracies across the two experiments. The last row of the figure gives results for an oracle ensemble which correctly pre2789 Penn Treebank OntoNotes Token OOV Sentence Token Reported Replicated (95% CIs) Replicated Replicated Reproduced TnT .9646 .9646 (.9636, .9656) .8591 .4771 .9622 Collins .9711 .9714 (.9704, .9723) .8789 .5441 .9679 LAPOS .9722 .9722 (.9713, .9731) .8874 .5602 .9709 Stanford .9732 .9735 (.9726, .9744) .9060 .5710 .9714 NLP4J .9764 .9742 (.9733, .9750) .9148 .5756 .9742 Flair .9785 .9774 (.9765, .9782) .9287 .6111 .9790 Table 2: Previously reported, and replicated, accuracies for the standard split of the WSJ portion of Penn Treebank; we also provide token accuracies for a reproduction with the WSJ portion of OntoNotes. PTB ON TnT vs. Collins 20 20 Collins vs. LAPOS 20 7 LAPOS vs. Stanford 1 0 Stanford vs. NLP4J 19 20 NLP4J vs. Flair 20 20 Table 3: The number of random trials (out of twenty) for which the second system has significantly higher token accuracy than the first after Bonferroni correction. PTB, Penn Treebank; ON, OntoNotes. dicts the tag just in case any of the six taggers predicts the correct tag. 3.4 Error analysis From experiment 1, we estimate that the last two decades of POS tagging research has produced a 1.28% absolute reduction in token errors. At the same time, the best tagger is 1.16% below the oracle ensemble. Thus we were interested in disagreements between taggers. We investigate this by treating each of the six taggers as separate coders in a collaborative annotation task. 
We compute persentence inter-annotator agreement using Krippendorff’s α (Artstein and Poesio, 2008), then manually inspect sentences with the lowest α values, i.e., with the highest rate of disagreement. By far the most common source of disagreement are “headline”-like sentences such as Foreign Bonds. While these sentences are usually quite short, high disagreement is also found for some longer headlines, as in the example sentence in table 4; the effect seems to be due more to capitalization than sentence length. Several taggers lean heavily on capitalization cues to identify proper nouns, and Figure 1: A visualization of Penn Treebank token accuracies in the two experiments. The whiskers shows accuracy and 95% confidence intervals in experiment 1, and shaded region represents the range of accuracies in experiment 2. thus capitalized tokens in headline sentences are frequently misclassified as proper nouns and vice versa, as are sentence-initial capitalized nouns in general. Most other sentences with low α have local syntactic ambiguities. For example, the word lining, acting as a common noun (NN) in the context …a silver for the…, is mislabeled as a gerund (VBG) by two of six taggers. 4 Discussion We draw attention to two distinctions between the replication and reproduction experiments. First, we find that a system judged to be significantly better than another on the basis of performance on the 2790 Chicken Chains Ruffled By Loss of Customers Gold NN NNS VBN IN NN IN NNS TnT NNP NNP NNP IN NN IN NNS Collins NNP NNP NNP IN NNP IN NNS LAPOS NNP NNP NNP NNP NNP IN NNS Stanford NNP NNS VBN IN NN IN NNS NLP4J NNP NNPS NNP IN NNP IN NNS Flair NN NNS VBN IN NN IN NNS Table 4: Example error analysis for a Penn Treebank sentence; α = .521. standard split, does not in outperform that system on re-annotated data or randomly generated splits, suggesting that it is “overfit to the standard split” and does not represent a genuine improvement in performance. Secondly, as can be seen in figure 1, overall performance is slightly higher on the random splits. We posit this to be an effect of randomization at the sentence-level. For example, in the standard split the word asbestos occurs fifteen times in a single training set document, but just once in the test set. Such discrepancies are far less likely to arise in random splits. Diversity of languages, data, and tasks are all highly desirable goals for natural language processing. However, nothing about this demonstration depends on any particularities of the English language, the WSJ data, or the POS tagging task. English is a somewhat challenging language for POS tagging because of its relatively impoverished inflectional morphology and pervasive noun-verb ambiguity (Elkahky et al., 2018). It would not do to use these six taggers for other languages as they are designed for English text and in some cases depend on English-only external resources for feature generation. However, random split experiments could, for instance, be performed for the subtasks of the CoNLL-2018 shared task on multilingual parsing (Zeman et al., 2018). We finally note that repeatedly training the Flair tagger in experiment 2 required substantial grid computing resources and may not be feasible for many researchers at the present time. 5 Conclusions We demonstrate that standard practices in system comparison, and in particular, the use of a single standard split, may result in avoidable Type I error. 
We suggest that practitioners who wish to firmly establish that a new system is truly state-ofthe-art augment their evaluations with Bonferronicorrected random split hypothesis testing. It is said that statistical praxis is of greatest import in those areas of science least informed by theory. While linguistic theory and statistical learning theory both have much to contribute to part-ofspeech tagging, we still lack a theory of the tagging task rich enough to guide hypothesis formation. In the meantime, we must depend on system comparison, backed by statistical best practices and error analysis, to make forward progress on this task. Acknowledgments We thank Mitch Marcus for valuable discussion of the Wall St. Journal data. Steven Bedrick was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award number R01DC015999. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embedding for sequence labeling. In COLING, pages 1638–1649. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Thorsten Brants. 2000. TnT: a statistical part-of-speech tagger. In ANLC, pages 224–231. Jinho D. Choi. 2016. Dynamic feature induction: The last gist to the state-of-the-art. In NAACL, pages 271–281. Michael Collins. 2002. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In EMNLP, pages 1–8. 2791 Pascal Denis and Benoît Sagot. 2009. Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort. In Pacific Asia Conference on Language, Information and Computation, pages 110–119. Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: testing significance with multiple datasets. Transactions of the Association for Computational Linguistics, 5:471–486. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In ACL, pages 1383–1392. Bradley Efron. 1981. Nonparametric estimates of standard error: the jackknife, the bootstrap and other methods. Biometrika, 68(3):589–599. Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A challenge set and methods for nounverb ambiguity. In EMNLP, pages 2562–2572. Morten W. Fagerland, Stian Lydersen, and Petter Laake. 2013. The McNemar test for binary matched-pairs data: mid-p and asymptotic are better than exact conditional. BMC Medical Research Methodology, 13:91–91. Antske Fokkens, Marieke van Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. 2013. Offspring from reproduction problems: What replication failure teaches us. In ACL, pages 1691–1701. Larry Gillick and Stephen J. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In ICASSP, pages 23–26. Jesús Giménez and Lluís Màrquez. 2004. SVMTool: A general POS tagger generator based on support vector machines. In LREC, pages 43–46. Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: is it time for some linguistics? In CICLing, pages 171–189. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. 
Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. Margot Mieskes. 2017. A quantitative study of data in the NLP community. In Workshop on Ethics in NLP, pages 23–29. Ted Pedersen. 2008. Empiricism is not a matter of faith. Computational Linguistics, 34(3):465–470. Adwait Ratnaparkhi. 1997. A maximum entropy model for part-of-speech tagging. In EMNLP, pages 133– 142. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: performance study of LSTM-networks for sequence tagging. In EMNLP, pages 338–348. Jeffrey D. Scargle. 2000. Publication bias: the “filedrawer problem” in scientific inference. Journal of Scientific Exploration, 14(1):91–106. Natalie Schluter and Daniel Varab. 2018. When data permutations are pathological: the case of neural natural language inference. In EMNLP, pages 4935– 4939. Drahomíra Spoustová, Jan Hajič, Jan Raab, and Miroslav Spousta. 2009. Semi-supervised training for the averaged perceptron POS tagger. In EACL, pages 763–771. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In NAACL, pages 173–180. Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec: A fast and accurate method for word sense disambiguation in neural word embeddings. ArXiv preprint arXiv:1511.06388. Yoshimasa Tsuruoka, Yusuke Miyao, and Jun'ichi Kazama. 2011. Learning with lookahead: can history-based models rival globally optimized models? In CoNLL, pages 238–246. Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In ICNLP-AFNLP, pages 477–485. Ralph Weischedel, Eduard Hovy, Mitchell P. Marcus, Martha Palmer, Robert Belvin, Sameer Pradhan, …, and Nianwen Xue. 2011. OntoNotes: a large training corpus for enhanced processing. In Joseph Olive, Caitlin Christianson, and John McCarthy, editors, Handbook of natural language processing and machine translation, pages 54–63. Springer, New York. Martijn Wieling, Josine Rawee, and Gertjan van Noord. 2018. Reproducibility in computational linguistics: are we willing to share? Computational Linguistics, 44(4):641–649. Edwin B. Wilson. 1927. Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22:209–212. Mahsa Yarmohammadi. 2014. Discriminative training with perceptron algorithm for POS tagging task. Technical Report CSLU-2014-001, Center for Spoken Language Understanding, Oregon Health & Science University. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, …, and Josie Li. 2018. CoNLL 2018 shared task: multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 shared task: multilingual parsing from raw text to Universal Dependencies, pages 1– 21.
2019
267
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2792–2798 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2792 Aiming beyond the Obvious: Identifying Non-Obvious Cases in Semantic Similarity Datasets Nicole Peinelt1,2 and Maria Liakata1,2 and Dong Nguyen1,3 1The Alan Turing Institute, London, UK 2University of Warwick, Coventry, UK 3Utrecht University, Utrecht, The Netherlands {n.peinelt, m.liakata}@warwick.ac.uk, [email protected] Abstract Existing datasets for scoring text pairs in terms of semantic similarity contain instances whose resolution differs according to the degree of difficulty. This paper proposes to distinguish obvious from non-obvious text pairs based on superficial lexical overlap and ground-truth labels. We characterise existing datasets in terms of containing difficult cases and find that recently proposed models struggle to capture the non-obvious cases of semantic similarity. We describe metrics that emphasise cases of similarity which require more complex inference and propose that these are used for evaluating systems for semantic similarity. 1 Introduction Modelling semantic similarity between a pair of texts is a fundamental task in NLP with a wide range of applications (Baudiˇs et al., 2016). One area of active research is Community Question Answering (CQA) (Nakov et al., 2017; Bonadiman et al., 2017), which is concerned with the automatic answering of questions based on user generated content from Q&A websites (e.g. StackExchange) and requires modelling the semantic similarity between question and answer pairs. Another well-studied task is paraphrase detection (Socher et al., 2011; He et al., 2015; Tomar et al., 2017), which models the semantic equivalence between a pair of sentences. Evaluation for such tasks has primarily focused on metrics, such as mean average precision (MAP), F1 or accuracy, which give equal weights to all examples, regardless of their difficulty. However, as illustrated by the examples in Table 1, not all items within text pair similarity datasets are equally difficult to resolve. Recent work has shown the need to better understand limitations of current models and datasets in natural language understanding (Wadhwa et al., id case documents 160174 Po what‘s the origin of the word o‘clock? what is the origin of the word o‘clock? 115695 Pn which is the best way to learn coding? how do you learn to program? 193190 No what are the range of careers in biotechnology in indonesia? how do you tenderize beef stew meat? 268368 Nn what is meant by ‘e‘ in mathematics? what is meant by mathematics? Table 1: Examples for difficulty cases from the development set of the Quora dataset. o=obvious, n=nonobvious, N=negative label, P=positive label 2018a; Rajpurkar et al., 2018). For example, Kaushik and Lipton (2018) showed that models sometimes exploit dataset properties to achieve high performance even when crucial task information is withheld, and Gururangan et al. (2018) demonstrated that model performance is inflated by annotation artefacts in natural language inference tasks. In this paper, we analyse current datasets and recently proposed models by focusing on item difficulty based on shallow lexical overlap. Rodrigues et al. (2018) found declarative CQA sentence pairs to be more difficult to resolve than interrogative pairs as the latter contain more cases of superficial overlap. In addition, Wadhwa et al. 
(2018b) showed that competitive neural reading comprehension models are susceptible to shallow patterns (e.g. lexical overlap). Our study digs deeper into these findings to investigate the properties of current text pair similarity datasets with respect to different levels of difficulty and evaluates models based on how well they can resolve difficult cases. We make the following contributions: 1. We propose a criterion to distinguish between obvious and non-obvious examples in text 2793 pair similarity datasets (section 4). 2. We characterise current datasets in terms of the extent to which they contain obvious vs. non-obvious items (section 4). 3. We propose alternative evaluation metrics based on example difficulty (section 5) and provide a reference implementation at https://github.com/wuningxi/LexSim. 2 Datasets and Tasks We selected well-known benchmark datasets differing in size (small vs. large), document length (single sentence vs. multi-sentence), document types (declarative vs. interrogative) and tasks (answer ranking vs. paraphrase detection vs. similarity scoring), see Table 2. SemEval The SemEval Community Question Answering (CQA) dataset (Nakov et al., 2015, 2016, 2017) contains posts from the online forum Qatar Living. The task is to rank relevant posts above non-relevant ones. Each subtask involves an initial post and 10 possibly relevant posts with binary annotations. Task A contains questions and comments from the same thread, task B involves question paraphrases, and task C is similar to A but contains comments from an external thread. MSRP The Microsoft Research Paraphrase corpus (MSRP) is a popular paraphrase detection dataset, consisting of pairs of sentences with binary judgments (Dolan and Brockett, 2005). Name Task Type Size SemEval (A) answer ranking rank 26K (B) paraphrase ranking rank 4K (C) answer ranking rank 47K Quora paraphrase detection class 404K MSRP paraphrase detection class 5K STS similarity scoring regr 8K Table 2: Selected text pair similarity data sets. Size as number of text pairs. rank=ranking task, class=classification task, regr=regression task. Quora The Quora duplicate questions dataset contains a large number of question pairs with binary labels1. The task is to predict whether two questions are paraphrases, similar to Task B of SemEval, but it is framed as a classification rather than a ranking problem. We use the same training / development / test set partition as Wang et al. (2017). STS The Semantic Textual Similarity Benchmark (STS) dataset (Cer et al., 2017) consists of a selection of STS SemEval shared tasks (2012-2017). It contains sentence pairs annotated with continuous semantic relatedness scores on a scale from 0 (low similarity) to 5 (high similarity). In this paper, we focus on predicting the semantic similarity between two text snippets in a binary classification scenario, as the ranking scenario is only applicable to some of the datasets. 
Binary labels are already provided for all tasks except for 1https://engineering.quora.com/Semantic-QuestionMatching-with-Deep-Learning 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 250 500 750 1000 1250 1500 1750 2000 2250 Number of text pairs Semeval A Negative Positive 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 100 200 300 400 500 Number of text pairs Semeval B Negative Positive 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 1000 2000 3000 4000 5000 6000 Number of text pairs Semeval C Negative Positive 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 2500 5000 7500 10000 12500 15000 17500 Number of text pairs Quora Negative Positive 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 100 200 300 400 500 Number of text pairs MSRP Negative Positive 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 100 200 300 400 500 600 Number of text pairs STS Negative Positive Figure 1: Lexical divergence distribution by labels across datasets. JSD=Jensen-Shannon divergence. 2794 STS. In the case of STS, we convert the scores into binary labels. Based on the description of the relatedness scores in Cer et al. (2017), we assign a positive label if relatedness ≥4 and a negative one otherwise to use a similar criterion as in the other datasets. 3 Lexical divergence in current datasets To characterise the datasets, we represent the text pairs as two distributions over words and measure their lexical divergence using Jensen-Shannon divergence (JSD) (Lin, 1991).2 Figure 1 shows the entire JSD distribution by label for each dataset. The datasets differ with respect to the degree of lexical divergence they contain: The three SemEval CQA datasets show a high degree of lexical divergence (majority > 0.5), especially in the external QA scenario (task C). Text pairs in MSRP tend to have low-medium JSD scores (majority < 0.6), while items in Quora and STS show the widest range of lexical divergence (see also Appendix A). Overall, pairs with negative labels tend to have higher JSD scores than pairs with positive labels. Especially in Quora, MSRP and STS, distinct distributions emerge for positive vs. negative labels, providing direct clues for label assignment. 4 Distinguishing between obvious and non-obvious examples As shown, pairs with high lexical divergence tend to have a negative label in the above datasets (e.g. No in Table 1), while low lexical divergence is associated with a positive label (e.g. Po in Table 1). Intuitively, these are cases which should be relatively easy to identify. More difficult are text pairs with a positive label but high lexical divergence (e.g. Pn in Table 1), or a negative label despite low lexical divergence (e.g. Nn in Table 1). We use Table 3 to categorise cases in terms of their difficulty level. positive label negative label low div obvious pos (Po) non-obvious neg (Nn) high div non-obvious pos (Pn) obvious neg (No) Table 3: Defining obvious and non-obvious similarity cases based on labels and lexical overlap. 2We also calculated set-based similarity metrics (Jaccard Index and Dice Coefficient) and found consistent results with JSD, but give preference to the distribution-based metric which is more natural for text. Due to space restrictions, we only report JSD in this paper. Fleiss’ Kappa Avg. time per pair Instances Po 0.6429 11.58s 35 Pn 0.0878 11.68s 15 No 0.3886 12.50s 34 Nn 0.0892 13.83s 16 total 0.6267 12.27s 100 Table 4: Statistics for manual annotation on Quora. 
o=obvious, n=non-obvious, N=negative, P=positive SemEval Quora MSRP STS A B C Po 5893 1162 2492 107612 2398 1597 Pn 4428 531 1590 41691 1502 409 No 8842 1843 22155 160410 1398 3900 Nn 7377 1213 21253 94632 503 2719 o 56 63 52 66 65 64 m 0.80 0.79 0.82 0.53 0.52 0.52 Table 5: Difficulty case splits across datasets (train, dev and test combined). o=obvious, m=median JSD. Pairs are categorised into high and low lexical divergence categories by comparing their JSD score to the median of the entire JSD distribution in order to account for differences between datasets (>median: high div, ≤median: low div). To verify if this automatic difficulty distinction corresponds with real-world difficulty, the authors of the study annotated the semantic relatedness of 100 random pairs from the Quora development set and measured inter-annotator agreement based on Fleiss’ Kappa. The agreement for non-obvious cases (Pn and Nn) is significantly lower (p-value< 0.01 with permutation test) than for obvious cases (Po and No) and the average annotation time per item is longer for non-obvious cases (Table 4), confirming the validity of this distinction. Table 5 shows the number of instances in the four cases across datasets. In all of the analysed datasets, there are more obvious positives (Po) than non-obvious positives (Pn) and more obvious negatives (No) than non-obvious negatives (Nn). All obvious cases combined (Po+No) make up more than 50% of pairs across all datasets. 5 Evaluating model predictions based on difficulty We now use this categorisation for the purpose of model evaluation (Tables 6-8).3 We calculate the 3Due to the lack of openly available model prediction files, we only present our analysis for the Se2795 KeLP Beihang MSRA IIT UHH ECNU bunji EICA Swiss Alps FuRong Wang FA3L Snow Man random TPRo 0.652 1.000 0.800 0.790 0.681 0.328 0.333 0.562 0.691 0.677 0.501 TPRn 0.496 1.000 0.676 0.636 0.575 0.269 0.223 0.399 0.478 0.469 0.499 TNRo 0.909 0.000 0.731 0.877 0.894 0.959 0.984 0.913 0.787 0.900 0.515 TNRn 0.908 0.000 0.676 0.820 0.851 0.953 0.950 0.892 0.751 0.757 0.536 F1o 0.751 0.682 0.781 0.829 0.765 0.480 0.494 0.684 0.731 0.765 0.513 F1n 0.628 0.686 0.686 0.707 0.672 0.410 0.352 0.533 0.560 0.555 0.519 F1 0.698 0.684 0.739 0.777 0.725 0.450 0.433 0.621 0.659 0.673 0.516 MAP 0.884 0.882 0.869 0.867 0.866 0.865 0.862 0.843 0.834 0.818 0.623 Table 6: Proposed evaluation metrics for top 10 primary submissions on SemEval Task A. The systems are ordered in columns according to their MAP ranking. Bold indicates the highest value for each metric. We indicate the 2nd and 3rd systems based on F1n and F1. Sim Bow LearningTo Question KeLP Talla Beihang MSRA NLM NIH Uinsuska TiTech IIT UHH SCIR QA FA3L random TPRo 0.976 1.000 0.920 0.760 1.000 0.880 0.752 0.704 0.912 0.448 0.552 TPRn 0.842 1.000 0.632 0.763 1.000 0.500 0.421 0.737 0.842 0.263 0.395 TNRo 0.609 0.000 0.831 0.684 0.000 0.841 0.858 0.682 0.709 0.861 0.495 TNRn 0.197 0.000 0.432 0.467 0.000 0.397 0.552 0.403 0.352 0.756 0.521 F1o 0.604 0.383 0.746 0.548 0.383 0.736 0.681 0.516 0.641 0.473 0.348 F1n 0.198 0.195 0.199 0.247 0.195 0.154 0.164 0.221 0.234 0.160 0.147 F1 0.424 0.312 0.506 0.426 0.312 0.473 0.467 0.390 0.464 0.365 0.280 MAP 0.472 0.469 0.467 0.457 0.448 0.446 0.434 0.431 0.427 0.422 0.298 Table 7: Proposed evaluation metrics for top 10 primary submissions on SemEval Task B. true positive rate TPR (for Po and Pn) and true negative rate TNR (for No and Nn) to analyse model performance within each difficulty category. 
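A minimal sketch of the JSD-based categorisation of sections 3–4 is given below; it is illustrative only, and the released LexSim implementation, rather than this sketch, defines the exact tokenisation and preprocessing.

```python
import numpy as np
from collections import Counter

def jsd(text1, text2):
    """Jensen-Shannon divergence (log base 2, range [0, 1]) between word distributions."""
    c1, c2 = Counter(text1.lower().split()), Counter(text2.lower().split())
    vocab = sorted(set(c1) | set(c2))
    p = np.array([c1[w] for w in vocab], dtype=float); p /= p.sum()
    q = np.array([c2[w] for w in vocab], dtype=float); q /= q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def difficulty_cases(pairs, labels):
    """Assign each text pair to Po / Pn / No / Nn by comparing its JSD to the median (Table 3)."""
    scores = np.array([jsd(a, b) for a, b in pairs])
    median = np.median(scores)
    cases = []
    for s, y in zip(scores, labels):
        if y == 1:
            cases.append("Po" if s <= median else "Pn")   # low div + positive is obvious
        else:
            cases.append("Nn" if s <= median else "No")   # low div + negative is non-obvious
    return cases
```

F1o and F1n are then ordinary F1 scores computed over the obvious (Po, No) and non-obvious (Pn, Nn) subsets, respectively.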
In the three SemEval 2017 CQA tasks, all systems perform worse on the hard cases compared to the obvious cases (TPRn < TPRo and TNRn < TNRo), while there are only minor changes in the random baseline which predicts all classes with equal probability. To compare how well models do on IIT UHH bunji KeLP EICA random TPRo 0.570 0.246 0.911 0.006 0.520 TPRn 0.358 0.045 0.836 0.000 0.433 TNRo 0.898 0.991 0.720 0.998 0.502 TNRn 0.779 0.965 0.538 0.999 0.502 F1o 0.283 0.339 0.209 0.011 0.076 F1n 0.047 0.028 0.054 0.000 0.027 F1 0.144 0.197 0.121 0.008 0.053 MAP 0.155 0.147 0.144 0.135 0.058 Table 8: Proposed evaluation metrics for top 4 primary submissions on SemEval Task C. mEval CQA Tasks based on prediction files obtained from http://alt.qcri.org/semeval2017/task3/index.php?id=results. obvious vs. non-obvious cases overall, we compute F1 scores for obvious cases (Po and No) as F1o and non-obvious cases (Pn and Nn) as F1n separately. This is necessary as the high percentage of obvious cases (observed in section 4) can inflate the overall F1 score. F1n scores are consistently lower than the F1o scores. This difference is especially pronounced in Task B, which contained the highest proportion of obvious cases (62%) of the SemEval tasks. Using the non-obvious F1 scores results in a different ranking compared to the official SemEval evaluation metrics (F1 or MAP), even resulting in a change in the highest ranked system in Task B (Talla instead of KeLP or SimBow) and C (KeLP instead of bunji or IIT-UHH). 6 Conclusion We present an automated criterion for automatically distinguishing between easy and difficult items in text pair similarity prediction tasks. We find that more than 50% of cases in current datasets are relatively obvious. Recently proposed models perform significantly worse on nonobvious cases compared to obvious cases. In or2796 der to encourage the development of models that perform well on difficult items, we propose to use non-obvious F1 scores (F1n) as a complementary ranking metric for model evaluation. We also recommend publishing prediction files along with models to facilitate error analysis. Acknowledgments This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. References Petr Baudiˇs, Jan Pichl, Tom´aˇs Vyskoˇcil, and Jan ˇSediv´y. 2016. Sentence Pair Scoring: Towards Unified Framework for Text Comprehension. arXiv preprint arXiv:1603.06127. Daniele Bonadiman, Antonio Uva, and Alessandro Moschitti. 2017. Effective Shared Representations with Multitask Learning for Community Question Answering. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 726–732, Valencia, Spain. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically Constructing a Corpus of Sentential Paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP@IJCNLP), pages 9–16, Jeju Island, Korea. Asian Federation of Natural Language Processing. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation Artifacts in Natural Language Inference Data. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Hua He, Kevin Gimpel, and Jimmy Lin. 2015. MultiPerspective Sentence Similarity Modeling with Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1576– 1586, Lisbon, Portugal. Association for Computational Linguistics. Divyansh Kaushik and Zachary C Lipton. 2018. How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010–5015, Brussels, Belgium. Association for Computational Linguistics. Jianhua Lin. 1991. Divergence Measures based on the Shannon Entropy. IEEE Transactions on Information theory, 37(1):145–151. Preslav Nakov, Doris Hoogeveen, Ll´uis M`arquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. SemEval2017 Task 3: Community Question Answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval@ACL 2017), pages 27–48, Vancouver, Canada. Association for Computational Linguistics. Preslav Nakov, Lluis Marquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, James Glass, and Bilal Randeree. 2016. SemEval-2016 Task 3: Community Question Answering. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval@NAACL-HLT 2016), pages 525–545, San Diego, California. Association for Computational Linguistics. Preslav Nakov, Lluis Marquez, Magdy Walid, Alessandro Moschitti, James Glass, and Bilal Randeree. 2015. SemEval-2015 task 3: Answer Selection in Community Question Answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval@NAACL-HLT 2015), pages 269– 281, Denver, Colorado. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don’t Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Joao Rodrigues, Chakaveh Saedi, Antonio Branco, and Joao Silva. 2018. Semantic Equivalence Detection: Are Interrogatives Harder than Declaratives? In Proceedings of the 11th International Conference on Language Resources and Evaluation, pages 3248–3253, Miyazaki, Japan. European Language Resources Association. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems (NIPS), pages 801–809, Granada, Spain. 2797 Gaurav Singh Tomar, Thyago Duque, Oscar T¨ackstr¨om, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural Paraphrase Identification of Questions with Noisy Pretraining. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 142–147, Copenhagen, Denmark. Association for Computational Linguistics. Soumya Wadhwa, Khyathi Raghavi Chandu, and Eric Nyberg. 2018a. Comparative Analysis of Neural QA models on SQuAD. 
In Proceedings of the Workshop on Machine Reading for Question Answering, pages 89–97, Melbourne, Australia. Association for Computational Linguistics. Soumya Wadhwa, Varsha Embar, Matthias Grabmair, and Eric Nyberg. 2018b. Towards InferenceOriented Reading Comprehension: ParallelQA. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 1–7, New Orleans, Louisiana. Association for Computational Linguistics. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral Multi-Perspective Matching for Natural Language Sentences. In Proceedings of the TwentySixth International Joint Conference on Artificial Intelligence (IJCAI), pages 4144–4150, Melbourne, Australia. 2798 A Appendix 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 500 1000 1500 2000 2500 Number of text pairs Semeval A train_large test2016 test2017 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 100 200 300 400 500 Number of text pairs Semeval B train_large test2016 test2017 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 1000 2000 3000 4000 Number of text pairs Semeval C train_large test2016 test2017 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 5000 10000 15000 20000 Number of text pairs Quora train dev test 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 50 100 150 200 250 300 350 400 Number of text pairs MSRP train dev test 0.0 0.2 0.4 0.6 0.8 1.0 JSD 0 100 200 300 400 500 Number of text pairs STS train dev test Figure 2: Lexical divergence distribution by training, development and test set across different semantic similarity datasets. JSD=Jensen-Shannon divergence.
2019
268
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2799–2808 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2799 Putting Evaluation in Context: Contextual Embeddings improve Machine Translation Evaluation Nitika Mathur Timothy Baldwin Trevor Cohn School of Computing and Information Systems The University of Melbourne Victoria 3010, Australia [email protected] {tbaldwin,tcohn}@unimelb.edu.au Abstract Accurate, automatic evaluation of machine translation is critical for system tuning, and evaluating progress in the field. We proposed a simple unsupervised metric, and additional supervised metrics which rely on contextual word embeddings to encode the translation and reference sentences. We find that these models rival or surpass all existing metrics in the WMT 2017 sentence-level and systemlevel tracks, and our trained model has a substantially higher correlation with human judgements than all existing metrics on the WMT 2017 to-English sentence level dataset. 1 Introduction Evaluation metrics are a fundamental component of machine translation (MT) and other language generation tasks. The problem of assessing whether a translation is both adequate and coherent is a challenging text analysis problem, which is still unsolved, despite many years of effort by the research community. Shallow surfacelevel metrics, such as BLEU and TER (Papineni et al., 2002; Snover et al., 2006), still predominate in practice, due in part to their reasonable correlation to human judgements, and their being parameter free, making them easily portable to new languages. In contrast, trained metrics (Song and Cohn, 2011; Stanojevic and Sima’an, 2014; Ma et al., 2017; Shimanaka et al., 2018), which are learned to match human evaluation data, have been shown to result in a large boost in performance. This paper aims to improve over existing MT evaluation methods, through developing a series of new metrics based on contextual word embeddings (Peters et al., 2018; Devlin et al., 2019), a technique which captures rich and portable representations of words in context, which have been shown to provide important signal to many other NLP tasks (Rajpurkar et al., 2018). We propose a simple untrained model that uses off-theshelf contextual embeddings to compute approximate recall, when comparing a reference to an automatic translation, as well as trained models, including: a recurrent model over reference and translation sequences, incorporating attention; and the adaptation of an NLI method (Chen et al., 2017) to MT evaluation. These approaches, though simple in formulation, are highly effective, and rival or surpass the best approaches from WMT 2017. Moreover, we show further improvements in performance when our trained models are learned using noisy crowd-sourced data, i.e., having single annotations for more instances is better than collecting and aggregating multiple annotations for single instances. The net result is an approach that is more data efficient than existing methods, while producing substantially better human correlations.1 2 Related work MT metrics attempt to automatically predict the quality of a translation by comparing it to a reference translation of the same source sentence. Metrics such as BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) use n-gram matching or more explicit word alignment to match the system output with the reference translation. 
Character-level variants such as BEER, CHRF and CHARACTER overcome the problem of harshly penalising morphological variants, and perform surprisingly well despite their simplicity (Stanojevic and Sima'an, 2014; Popović, 2015; Wang et al., 2016). In order to allow for variation in word choice and sentence structure, other metrics use information from shallow linguistic tools such as POS taggers, lemmatizers and synonym dictionaries (Banerjee and Lavie, 2005; Snover et al., 2006; Liu et al., 2010), or deeper linguistic information such as semantic roles, dependency relationships, syntactic constituents, and discourse roles (Giménez and Màrquez, 2007; Castillo and Estrella, 2012; Guzmán et al., 2014). On the flip side, it is likely that these are too permissive of mistakes.
More recently, metrics such as MEANT 2.0 (Lo, 2017) have adopted word embeddings (Mikolov et al., 2013) to capture the semantics of individual words. However, classic word embeddings are independent of word context, and context is captured instead using hand-crafted features or heuristics. Neural metrics such as ReVal and RUSE solve this problem by directly learning embeddings of the entire translation and reference sentences. ReVal (Gupta et al., 2015) learns sentence representations of the MT output and reference translation as a Tree-LSTM, and then models their interactions using the element-wise difference and angle between the two. RUSE (Shimanaka et al., 2018) has a similar architecture, but it uses pretrained sentence representations instead of learning the sentence representations from the data.
The Natural Language Inference (NLI) task is similar to MT evaluation (Padó et al., 2009): a good translation entails the reference and vice versa. An irrelevant/wrong translation would be neutral/contradictory compared to the reference. An additional complexity is that MT outputs are not always fluent. On the NLI datasets, systems that include pairwise word interactions when learning sentence representations have a higher accuracy than systems that process the two sentences independently (Rocktäschel et al., 2016; Chen et al., 2017; Wang et al., 2017). In this paper, we attempt to introduce this idea to neural MT metrics.
1 Code is available at https://github.com/nitikam/mtevalin-context
3 Model
We wish to predict the score of a translation t of length l_t against a human reference r of length l_r. For all models, we use fixed pre-trained contextualised word embeddings e_k to represent each word in the MT output and reference translation, in the form of matrices W_t and W_r.
3.1 Unsupervised Model
We use cosine similarity to measure the pairwise similarity between t and r based on the maximum similarity score for each word embedding e_i ∈ t with respect to each word embedding e_j ∈ r. We approximate recall of a word in r with its maximum similarity with any word in t.
The final predicted score, y, for a translation is the average recall of its reference:

\mathrm{recall}_j = \max_{i=1}^{l_t} \mathrm{cosine}(e_i, e_j)    (1)

y = \frac{1}{l_r} \sum_{j=1}^{l_r} \mathrm{recall}_j    (2)

3.2 Supervised Models
Trained BiLSTM. We first encode the embeddings of the translation and reference with a bidirectional LSTM, and concatenate the max-pooled and average-pooled hidden states of the BiLSTM to generate v_t and v_r, respectively:

v_{s,\mathrm{max}} = \max_{k=1}^{l_s} h_{s,k}, \qquad v_{s,\mathrm{avg}} = \frac{1}{l_s} \sum_{k=1}^{l_s} h_{s,k}    (3)

v_s = [v_{s,\mathrm{max}}; v_{s,\mathrm{avg}}]    (4)

To get the predicted score, we run a feedforward network over the concatenation of the sentence representations of t and r, and their element-wise product and difference (a useful heuristic first proposed by Mou et al. (2016)). We train the model by minimizing mean squared error with respect to human scores.

m = [v_t; v_r; v_t \odot v_r; v_t - v_r]    (5)

y = w^\top \mathrm{ReLU}(W^\top m + b) + b'    (6)

This is similar to RUSE, except that we learn the sentence representation instead of using pretrained sentence embeddings.

Trained BiLSTM + attention. To obtain a sentence representation of the translation which is conditioned on the reference, we compute the attention-weighted representation of each word in the translation. The attention weights are obtained by running a softmax over the dot product similarity between the hidden state of the translation and reference BiLSTM. Similarly, we compute the relevant representation of the reference:

a_{i,j} = h_{r_i}^\top h_{t_j}    (7)

\tilde{h}_r = \sum_{j=1}^{l_t} \frac{\exp(a_{i,j})}{\sum_i \exp(a_{i,j})} \cdot h_t    (8)

\tilde{h}_t = \sum_{i=1}^{l_r} \frac{\exp(a_{i,j})}{\sum_j \exp(a_{i,j})} \cdot h_r    (9)

We then use \tilde{h}_t and \tilde{h}_r as our sentence representations in Eq. (3)–(6) to compute the final scores.

Enhanced Sequential Inference Model (ESIM). We also directly adapt ESIM (Chen et al., 2017), a high-performing model on the Natural Language Inference task, to the MT evaluation setting. We treat the human reference translation and the MT output as the premise and hypothesis, respectively. The ESIM model first encodes r and t with a BiLSTM, then computes the attention-weighted representations of each with respect to the other (Eq. (7)–(9)). This model next "enhances" the representations of the translation (and reference) by capturing the interactions between h_t and \tilde{h}_t (and h_r and \tilde{h}_r):

m_r = [h_r; \tilde{h}_r; h_r \odot \tilde{h}_r; h_r - \tilde{h}_r]    (10)

m_t = [h_t; \tilde{h}_t; h_t \odot \tilde{h}_t; h_t - \tilde{h}_t]    (11)

We use a feedforward projection layer to project these representations back to the model dimension, and then run a BiLSTM over each representation to compose local sequential information. The final representation of each pair of reference and translation sentences is the concatenation of the average-pooled and max-pooled hidden states of this BiLSTM. To compute the final predicted score, we apply a feedforward regressor over the concatenation of the two sentence representations.

p = [v_{r,\mathrm{avg}}; v_{r,\mathrm{max}}; v_{t,\mathrm{avg}}; v_{t,\mathrm{max}}]    (12)

y = w^\top \mathrm{ReLU}(W^\top p + b) + b'    (13)

For all models, the predicted score of an MT system is the average predicted score of all its translations in the testset.

4 Experimental Setup
We use human evaluation data from the Conference on Machine Translation (WMT) to train and evaluate our models (Bojar et al., 2016, 2017a), which is based on the Direct Assessment ("DA") method (Graham et al., 2015, 2017). Here, system translations are evaluated by humans in comparison to a human reference translation, using a continuous scale (Graham et al., 2015, 2017). Each annotator assesses a set of 100 items, of which 30 items are for quality control, which is used to filter out annotators who are unskilled or careless.
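As a concrete illustration of the unsupervised metric in Eqs. (1)–(2), the sketch below computes the approximate recall from already-extracted contextual embeddings of the translation and reference (the matrices W_t and W_r of Section 3). Obtaining the embeddings from BERT is not shown, and the function and variable names are illustrative rather than taken from the released code:

```python
import numpy as np

def bert_recall(trans_emb, ref_emb):
    """Approximate recall of the reference against the translation (Eqs. 1-2).

    trans_emb: array of shape (l_t, d), contextual embeddings of the MT output.
    ref_emb:   array of shape (l_r, d), contextual embeddings of the reference.
    Returns the average, over reference tokens, of each token's maximum
    cosine similarity with any translation token.
    """
    # Normalise rows so that dot products become cosine similarities.
    t = trans_emb / np.linalg.norm(trans_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = r @ t.T                              # (l_r, l_t) cosine similarity matrix
    recall_per_ref_token = sim.max(axis=1)     # Eq. (1): best match per reference token
    return float(recall_per_ref_token.mean())  # Eq. (2): average recall

# Toy usage with random "embeddings"; in practice these come from a pretrained encoder.
rng = np.random.default_rng(0)
print(bert_recall(rng.normal(size=(7, 768)), rng.normal(size=(9, 768))))
```

Row-normalising both matrices means the single matrix product already yields all pairwise cosine similarities, so no explicit double loop over tokens is needed.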
Individual worker scores are first standardised, and then the final score of an MT system is computed as the average score across all translations in the test set. Manual MT evaluation is subjective and difficult, and it is not possible even for a diligent human to be entirely consistent on a continuous scale. Thus, any human annotations are noisy by nature. To obtain an accurate score for individual translations, the average score is calculated from scores of at least 15 “good” annotators. This data is then used to evaluate automatic metrics at the sentence level (Graham et al., 2015). We train on the human evaluation data of news domain of WMT 2016, which is entirely crowdsourced. The sentence-level-metric evaluation data consists of accurate scores for 560 translations each for 6 to-English language pairs and English-to-Russian (we call this the “TrainS” dataset). The dataset also includes mostly singly-annotated2 DA scores for around 125 thousand translations from six source languages into English, and 12.5 thousand translations from English-to-Russian (“TrainL” dataset), that were collected to obtain human scores for MT systems. For the validation set, we use the sentencelevel DA judgements collected for the WMT 2015 data (Bojar et al., 2015): 500 translation-reference pairs each of four to-English language pairs, and English-to-Russian. For more details on implementation and training of our models, see Appendix A. We test our metrics on all language pairs from the WMT 2017(Bojar et al., 2017b) news task in both the sentence and system level setting, and evaluate using Pearson’s correlation between our metrics’ predictions and the Human DA scores. For the sentence level evaluation, insufficient DA annotations were collected for five fromEnglish language pairs, and these were converted to preference judgements. If two MT system translations of a source sentence were evaluated by at least two reliable annotators, and the average score for System A is reasonably greater than the average score of System B, then this is interpreted as a Relative Ranking (DARR) judgement where Sys A is better than Sys B. The metrics are then evaluated using (a modified version of) Kendall’s Tau correlation over these preference judgements. We also evaluate on out-of-domain, system 2about 15% of the translations have a repeat annotation collected as part of quality-control 2802 level data for five from-English language pairs from the WMT 2016 IT task. 5 Results Tab. 1 compares the performance of our proposed metrics against existing metrics on the WMT 17 to-English news dataset. MEANT 2.0 (Lo, 2017) is the best untrained metric — it uses pre-trained word2vec embeddings (Mikolov et al., 2013)—, and RUSE (Shimanaka et al., 2018) is the best trained metric. We also include SENT-BLEU and CHRF baselines. Our simple average recall metric (“BERTR”) has a higher correlation than all existing metrics, and is highly competitive with RUSE. When trained on the sentence-level data (as with RUSE), the BiLSTM baseline does not perform well, however adding attention makes it competitive with RUSE. The ESIM model — which has many more parameters — underperforms compared to the BiLSTM model with attention. However, the performance of all models improves substantially when these metrics are trained on the larger, singly-annotated training data (denoted “TrainL”), i.e., using data from only those annotators who passed quality control. 
Clearly the additional input instances make up for the increased noise level in the prediction variable. The simple BiLSTM model performs as well as RUSE, and both the models with attention substantially outperform this benchmark. In this setting, we look at how the performance of ESIM improves as we increase the number of training instances (Fig. 1). We find that on the same number of training instances (3360), the model performs better on cleaner data compared to singly-annotated data (r = 0.57 vs 0.64). However, when we have a choice between collecting multiple annotations for the same instances vs collecting annotations for additional instances, the second strategy leads to more gains. We now evaluate the unsupervised BERTR model and the ESIM model (trained on the large dataset) in the other settings. In the sentence level tasks out-of-English (Tab. 4), the BERTR model (based on BERT-Chinese) significantly outperforms all metrics in the English-to-Chinese testset. For other language pairs, BERTR (based on multilingual BERT) is highly competitive with other metrics. ESIM performs well in the language pairs that are evaluated using Pearson’s cor10000 20000 30000 40000 Num Annotations collected 0.550 0.575 0.600 0.625 0.650 0.675 0.700 0.725 Pearson’s r with human scores single annotation per translation multiple annotations for a set of 3360 translations Figure 1: Average Pearson’s r for ESIM over the WMT 2017 to-English sentence-level dataset vs. the total number of annotations in the training set. We contrast two styles of collecting data: (1) the circles are trained on a single annotation per instance; and (2) the crosses are trained on the mean of N annotations per instance, as N goes from 1 to 14. The first strategy is more data-efficient. relation. But the results are mixed when evaluated based on preference judgements. This could be an effect of our training method – using squared error as part of regression loss – being better suited to Pearson’s r — and might be resolved through a different loss, such as hinge loss over pairwise preferences which would better reflect Kendall’s Tau (Stanojevic and Sima’an, 2014). Furthermore, ESIM is trained only on to-English and to-Russian data. It is likely that including more language pairs in the training data will increase correlation. On the system level evaluation of the news domain, both metrics are competitive with all other metrics in all language pairs both to- and out-ofEnglish (see Tab. 3 and Tab. 4 in Appendix B). In the IT domain, we have mixed results (Tab. 5 in the Appendix). ESIM significantly outperforms all other metrics in English–Spanish, is competitive in two other language pairs, and is outperformed by other metrics in the remaining two language pairs. 5.1 Qualitative Analysis We manually inspect translations in the validation set. Tab. 6 in Appendix C shows examples of good translations, where our proposed metrics correctly recognise synonyms and valid word re-orderings, unlike SENT-BLEU. However, none of the metrics recognise a different way of expressing the same meaning. From Tab. 7, we see that SENTBLEU gives high scores to translations with high partial overlap with the reference, but ESIM cor2803 cs–en de–en fi–en lv–en ru–en tr–en zh–en AVE. 
Baselines BLEU 0.435 0.432 0.571 0.393 0.484 0.538 0.512 0.481 CHRF 0.514 0.531 0.671 0.525 0.599 0.607 0.591 0.577 MEANT 2.0 0.578 0.565 0.687 0.586 0.607 0.596 0.639 0.608 RUSE 0.614 0.637 0.756 0.705 0.680 0.704 0.677 0.682 P BERTR 0.655 0.650 0.777 0.671 0.680 0.702 0.687 0.689 TrainS BiLSTM 0.517 0.556 0.735 0.672 0.606 0.619 0.565 0.610 BiLSTM + attention 0.611 0.603 0.763 0.740 0.655 0.695 0.694 0.680 ESIM 0.534 0.546 0.757 0.704 0.621 0.632 0.629 0.632 TrainL BiLSTM 0.628 0.621 0.774 0.732 0.689 0.682 0.655 0.682 BiLSTM + attention 0.704 0.710 0.818 0.777 0.744 0.753 0.737 0.749 ESIM 0.692 0.706 0.829 0.764 0.726 0.776 0.732 0.746 Table 1: Pearson’s r on the WMT 2017 sentence-level evaluation data. P: Unsupervised metric that relies on pretrained embeddings; TrainS: trained on accurate 3360 instances; TrainL: trained on noisy 125k instances. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold (William’s test; Graham and Baldwin, 2014) en–cs en–de en–fi en–lv en–ru en–tr en–zh τ τ τ τ ρ τ ρ Baselines SENT-BLEU 0.274 0.269 0.446 0.259 0.468 0.377 0.642 CHRF 0.376 0.336 0.503 0.420 0.605 0.466 0.608 BEER 0.398 0.336 0.557 0.420 0.569 0.490 0.622 MEANT 2.0-NOSRL 0.395 0.324 0.565 0.425 0.636 0.482 0.705 MEANT 2.0 – – – – – – 0.727 P BERTR 0.390 0.365 0.564 0.417 0.630 0.457 0.803 T ESIM 0.338 0.362 0.523 0.350 0.700 0.506 0.699 Table 2: Pearson’s r and Kendall’s τ on the WMT 2017 from-English system-level evaluation data. The first section represents existing metrics, both trained and untrained. We then present results of our unsupervised metric, followed by our supervised metric trained in the TrainL setting: noisy 125k instances. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold (William’s test (Graham and Baldwin, 2014) for Pearson’s r and Bootstrap (Efron and Tibshirani, 1993) for Kendall’s τ.) rectly recognises then as low quality translations. However, in some cases, ESIM can be too permissive of bad translations which contain closely related words. There are also examples where a small difference in words completely changes the meaning of the sentence, but all the metrics score these translations highly. 6 Conclusion We show that contextual embeddings are very useful for evaluation, even in simple untrained models, as well as in deeper attention based methods. When trained on a larger, much noisier range of instances, we demonstrate a substantial improvement over the state of the art. In future work, we plan to extend these models by using cross-lingual embeddings, and combine information from translation–source interactions as well as translation–reference interactions. There are also direct applications to Quality Estimation, by using the source instead of the reference. Acknowledgements We thank the anonymous reviewers for their feedback and suggestions to improve the paper. This work was supported in part by the Australian Research Council. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. 2804 References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. 
Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017a. Findings of the 2017 Conference on Machine Translation (WMT17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169–214, Copenhagen, Denmark. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 131– 198, Berlin, Germany. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Ondˇrej Bojar, Yvette Graham, and Amir Kamran. 2017b. Results of the wmt17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 489–513, Copenhagen, Denmark. Julio Castillo and Paula Estrella. 2012. Semantic textual similarity for MT evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 52–58. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2019), Minneapolis, USA. Bradley Efron and Robert Tibshirani. 1993. An introduction to the bootstrap, volume 57. CRC press. Jes´us Gim´enez and Llu´ıs M`arquez. 2007. Linguistic features for automatic evaluation of heterogenous MT systems. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 256– 264. Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In EMNLP. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, 23(1):3–30. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, pages 1183–1191, Denver, USA. Rohit Gupta, Constantin Orasan, and Josef van Genabith. 2015. ReVal: A simple and effective machine translation evaluation metric based on recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1066–1072, Lisbon, Portugal. 
Francisco Guzm´an, Shafiq R Joty, Llu´ıs M`arquez, and Preslav Nakov. 2014. Using discourse structure improves machine translation evaluation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 687–698. Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. TESLA: Translation evaluation of sentences with linear-programming-based analysis. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metrics (MATR), pages 354–359. Chi-kiu Lo. 2017. MEANT 2.0: Accurate semantic MT evaluation for any output language. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 589–597, Copenhagen, Denmark. Qingsong Ma, Yvette Graham, Shugen Wang, and Qun Liu. 2017. Blend: a novel combined MT metric based on direct assessment — CASICT-DCU submission to WMT17 metrics task. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 598–603, Copenhagen, Denmark. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. 2805 Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 130–136. Sebastian Pad´o, Michel Galley, Dan Jurafsky, and Chris Manning. 2009. Robust machine translation evaluation with entailment features. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 297–305. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 311– 318, Philadelphia, USA. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In International Conference on Learning Representations (ICLR). Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. RUSE: Regressor using sentence embeddings for automatic machine translation evaluation. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 764–771, Belgium, Brussels. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. 
A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Transaltion in the Americas, pages 223–231. Xingyi Song and Trevor Cohn. 2011. Regression and ranking based optimisation for sentence level machine translation evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 123–129. Milos Stanojevic and Khalil Sima’an. 2014. BEER: BEtter evaluation as ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 414–419, Baltimore, USA. Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. CharacTer: Translation edit rate on character level. In Proceedings of the First Conference on Machine Translation, pages 505–510, Berlin, Germany. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4144–4150. A Implementation details We implement our models using AllenNLP in PyTorch. We experimented with both ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) embeddings, and found that BERT consistently performs as well as, or better than ELMo, thus we report results using only BERT embeddings in this paper. For BERTR, we use the top layer embeddings of the wordpieces of the MT and Reference translations. We use bert base uncased for all toEnglish language pairs, bert base chinese models for English-to-Chinese and bert base multilingual cased for the remaining to-English language pairs. For the trained metrics, we learn a weighted average of all layers of BERT embeddings. On the to-English testsets, we use bert base uncased embeddings and train on the WMT16 to-English data. On all other testsets, we use the bert base multilingual cased embeddings and train on the WMT 2016 English-toRussian, as well as all to-English data. Following the recommendations of the original ESIM paper, we fix the dimension of the BiLSTM hidden state to 300 and set the Dropout rate to 0.5. We use the Adam optimizer with an initial learning rate of 0.0004 and batch size of 32, and use early stopping on the validation dataset. Training the ESIM model on the full dataset takes around two hours on a single V100 GPU, and all models take less than two minutes to evaluate a standard WMT dataset of 3000 translations. 2806 B System-level results for WMT 17 news and WMT 2016 IT domain cs–en de–en fi–en lv–en ru–en tr–en zh–en num systems 4 11 6 9 9 10 16 Baselines BLEU 0.971 0.923 0.903 0.979 0.912 0.976 0.864 CHRF 0.939 0.968 0.938 0.968 0.952 0.944 0.859 CHARACTER 0.972 0.974 0.946 0.932 0.958 0.949 0.799 BEER 0.972 0.960 0.955 0.978 0.936 0.972 0.902 RUSE 0.990 0.968 0.977 0.962 0.953 0.991 0.974 P BERTR 0.996 0.971 0.948 0.980 0.950 0.994 0.970 T ESIM 0.983 0.949 0.985 0.974 0.921 0.986 0.901 Table 3: Pearson’s r on the WMT 2017 to-English system-level evaluation data. The first section represents existing metrics, both trained and untrained. We then present results of our unsupervised metric, followed by our supervised metric trained in the TrainL setting: noisy 130k instances. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. 
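Appendix A notes that the trained metrics learn a weighted average over all layers of the BERT embeddings. A minimal sketch of such a learned layer mix (often called a scalar mix) is given below; it illustrates the general technique rather than the exact module used in the paper, and the class and parameter names are ours:

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Learned weighted average over the layers of a pretrained encoder.

    Given per-layer embeddings of shape (num_layers, seq_len, dim), returns a
    single (seq_len, dim) representation using softmax-normalised layer
    weights and a global scale, both learned jointly with the rest of the model.
    """
    def __init__(self, num_layers: int):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_embeddings: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.layer_weights, dim=0)            # (num_layers,)
        mixed = (weights[:, None, None] * layer_embeddings).sum(dim=0)
        return self.gamma * mixed

# Toy usage: 13 layers (input embeddings + 12 Transformer layers), 9 tokens, 768 dims.
mix = ScalarMix(num_layers=13)
print(mix(torch.randn(13, 9, 768)).shape)   # torch.Size([9, 768])
```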
en–cs en–de en–fi en–lv en–ru en–tr en–zh num systems 14 16 12 17 9 8 11 Baselines BLEU 0.956 0.804 0.920 0.866 0.898 0.924 0.981 BEER 0.970 0.842 0.976 0.930 0.944 0.980 0.914 CHARACTER 0.981 0.938 0.972 0.897 0.939 0.975 0.933 CHRF 0.976 0.863 0.981 0.955 0.950 0.991 0.976 P BERTR 0.982 0.877 0.979 0.949 0.971 0.996 0.992 T ESIM 0.974 0.861 0.971 0.954 0.968 0.978 0.970 Table 4: Pearson’s r on the WMT 2017 from-English system-level evaluation data. The first section represents existing metrics, both trained and untrained. We then present results of our unsupervised metric, followed by our supervised metric trained in the TrainL setting: noisy 130k instances. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. en–cs en–de en–es en–nl en–pt num systems 5 10 4 4 4 Baselines BLEU 0.750 0.621 0.976 0.596 0.997 CHRF 0.845 0.588 0.915 0.951 0.967 BEER 0.744 0.621 0.931 0.983 0.989 CHARACTER 0.901 0.930 0.963 0.927 0.976 P BERTR 0.974 0.780 0.925 0.896 0.980 T ESIM 0.964 0.780 0.991 0.798 0.996 Table 5: Pearson’s r on the WMT 2016 IT domain system-level evaluation data. The first section represents existing metrics, both trained and untrained. We then present results of our pretrained metric, followed by our supervised metric trained in the TrainL setting: noisy 130k instances. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. 2807 C Qualitative analysis Translations with HIGH Human scores ESIM BERTR SENTBLEU ref: The negotiations have been scheduled to take place next Saturday, the Russian Minister of Energy, Alexander Nowak, said on Monday. sys: The negotiations are scheduled for coming Saturday, said the Russian energy minister Alexander Nowak on Monday. ref: Lesotho military says no coup planned; PM stays in South Africa sys: Lesotho-military member says that no coup is planned; Prime Minister remains in South Africa HIGH HIGH LOW ref: In September 2011, Abbott’s condition worsened again, and his consultant took his CT scans and X-rays to a panel of experts. sys: In September 2011 Abbotts state worsened again and his family doctor brought his CT-Scans and X-rays to an expert group. ref: The boardroom is now contemplating the possibility of working together. HIGH LOW LOW sys: Now the boards are thinking about a possible cooperation. ref: He ended up spending a month off work. sys: In the end, he could not go to work for a month. LOW LOW LOW Table 6: Examples of good translations in the WMT 2015 sentence level DA dataset and whether ESIM, BERTR and SENT-BLEU correctly give them high scores 2808 Translations with LOW Human scores ESIM BERTR SENTBLEU ref: For the benefit of the school, Richter nurtured a good relationship with the then Mayor, Ludwig Gtz (CSU). sys: For the good of the school of judges as rector of a good relationship with the former mayor Ludwig Gtz (CSU) ref: The military plays an important role in Pakistan and has taken power by force several times in the past. LOW LOW HIGH sys: The military plays an important role in Pakistan and has already more frequently geputscht. ref: Behind much of the pro-democracy campaign in Hong Kong is the Occupy Central With Love and Peace movement, whose organizers have threatened to shut down the financial district if Beijing does not grant authentic universal suffrage. 
sys: Behind the pro-democracy campaign in Hong Kong is the movement Occupy Central With Love and Peace, whose organizers have threatened the acupuncture, off, if Beijing allows no real universal suffrage. LOW HIGH HIGH ref: Foreign goods trade had slowed, too. sys: Foreign trade also slowed the economy. HIGH LOW LOW ref: Some shrapnel pieces are still in my knee. sys: Some garnet fragments are still in my knee. ref: Stewart hit the wall for the second time after his right front tire blew out on lap 172, ending his night. HIGH HIGH HIGH sys: Stewart raced for the second time against the wall after his right front tire on lap 172 and ended his evening. Table 7: Examples of bad quality translations in the WMT 2015 sentence level DA dataset and whether ESIM, BERTR and SENT-BLEU correctly give them low scores
2019
269
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 273–291 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 273 Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation Benjamin Heinzerling†∗and Michael Strube‡ †RIKEN AIP & Tohoku University ‡Heidelberg Institute for Theoretical Studies gGmbH [email protected] | [email protected] Abstract Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works well across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting. 1 Introduction Rare and unknown words pose a difficult challenge for embedding methods that rely on seeing a word frequently during training (Bullinaria and Levy, 2007; Luong et al., 2013). Subword segmentation methods avoid this problem by assuming a word’s meaning can be inferred from the meaning of its parts. Linguistically motivated subword approaches first split words into morphemes and then represent word meaning by composing morpheme embeddings (Luong et al., 2013). More recently, character-ngram approaches (Luong and Manning, 2016; Bojanowski et al., 2017) and Byte Pair Encoding (BPE) (Sennrich et al., 2016) have grown in popularity, likely due to their computational simplicity and language-agnosticity.1 ∗Work done while at HITS. 1While language-agnostic, these approaches are not language-independent. See Appendix B for a discussion. _magn us _car ls en _played M a g n u s charRNN C a r l s e n charRNN p l a y e d charRNN Magnus Carl ##sen played Multilingual BERT Magnus Carlsen played B-PER I-PER O BPEmb Figure 1: A high-performing ensemble of subword representations encodes the input using multilingual BERT (yellow, bottom left), an LSTM with BPEmb (pink, bottom middle), and a character-RNN (blue, bottom right). A meta-LSTM (green, center) combines the different encodings before classification (top). Horizontal arrows symbolize bidirectional LSTMs. Sequence tagging with subwords. Subword information has long been recognized as an important feature in sequence tagging tasks such as named entity recognition (NER) and part-ofspeech (POS) tagging. For example, the suffix -ly often indicates adverbs in English POS tagging and English NER may exploit that professions often end in suffixes like -ist (journalist, cyclist) or companies in suffixes like -tech or soft. In early systems, these observations were operationalized with manually compiled lists of such word endings or with character-ngram features (Nadeau and Sekine, 2007). 
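A minimal sketch of the ensemble in Figure 1 is given below: per-token encodings from the different subword encoders (e.g. BERT, a BPEmb BiLSTM and a character RNN) are concatenated, passed through a meta-BiLSTM and classified. The upstream encoders and the subword-to-token alignment described later in Section 3.1 are assumed to be given; class and variable names are illustrative, not taken from the authors' code:

```python
import torch
import torch.nn as nn

class MetaLSTMTagger(nn.Module):
    """Combine per-token encodings from several subword encoders with a
    meta-BiLSTM, then classify each token (e.g. into NER or POS tags)."""

    def __init__(self, encoder_dims, hidden_dim, num_tags):
        super().__init__()
        self.meta_lstm = nn.LSTM(sum(encoder_dims), hidden_dim,
                                 batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_encodings):
        # token_encodings: list of tensors, each of shape (batch, seq_len, dim_k),
        # already aligned so that every encoder yields one vector per token.
        combined = torch.cat(token_encodings, dim=-1)
        hidden, _ = self.meta_lstm(combined)
        return self.classifier(hidden)          # (batch, seq_len, num_tags)

# Toy usage: 2 sentences of 5 tokens, three encoders with different output sizes.
tagger = MetaLSTMTagger(encoder_dims=[768, 256, 100], hidden_dim=256, num_tags=7)
logits = tagger([torch.randn(2, 5, 768), torch.randn(2, 5, 256), torch.randn(2, 5, 100)])
print(logits.shape)   # torch.Size([2, 5, 7])
```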
Since the advent of neural sequence tagging (Graves, 2012; 274 Method Subword segmentation and token transformation Original text Magnus Carlsen played against Viswanathan Anand Characters M a g n u s C a r l s e n p l a y e d a g a i n s t V i s w a n a t h a n A n a n d Word shape Aa Aa a a Aa Aa FastText magnus+mag+. . . carlsen+car+arl+. . . played+. . . against+. . . vis+isw+. . . +nathan ana+. .. BPE vs1000 m ag n us car l s en play ed against v is w an ath an an and BPE vs3000 mag n us car ls en played against vis w an ath an an and BPE vs5000 magn us car ls en played against vis wan ath an an and BPE vs10000 magn us car ls en played against vis wan athan an and BPE vs25000 magnus car ls en played against vis wan athan an and BPE vs50000 magnus carls en played against vis wan athan anand BPE vs100000 magnus carlsen played against viswan athan anand BERT Magnus Carl ##sen played against V ##is ##wana ##than Anand Table 1: Overview of the subword segmentations and token transformations evaluated in this work. Huang et al., 2015), the predominant way of incorporating character-level subword information is learning embeddings for each character in a word, which are then composed into a fixedsize representation using a character-CNN (Chiu and Nichols, 2016) or character-RNN (char-RNN) (Lample et al., 2016). Moving beyond single characters, pretrained subword representations such as FastText, BPEmb, and those provided by BERT (see §2) have become available. While there now exist several pretrained subword representations in many languages, a practitioner faced with these options has a simple question: Which subword embeddings should I use? In this work, we answer this question for multilingual named entity recognition and part-of-speech tagging and make the following contributions: • We present a large-scale evaluation of multilingual subword representations on two sequence tagging tasks; • We find that subword vocabulary size matters and give recommendations for choosing it; • We find that different methods have different strengths: Monolingual BPEmb works best in medium- and high-resource settings, multilingual non-contextual subword embeddings are best in low-resource languages, while multilingual BERT gives good or best results across languages. 2 Subword Embeddings We now introduce the three kinds of multilingual subword embeddings compared in our evaluation: FastText and BPEmb are collections of pretrained, monolingual, non-contextual subword embeddings available in many languages, while BERT provides contextual subword embeddings for many languages in a single pretrained language model with a vocabulary shared among all languages. Table 1 shows examples of the subword segmentations these methods produce. 2.1 FastText: Character-ngram Embeddings FastText (Bojanowski et al., 2017) represents a word w as the sum of the learned embeddings ⃗zg of its constituting character-ngrams g and, in case of in-vocabulary words, an embedding ⃗zw of the word itself: ⃗w = ⃗zw + P g∈Gw ⃗zg, where Gw is the set of all constituting character n-grams for 3 ≤n ≤6. Bojanowski et al. provide embeddings trained on Wikipedia editions in 294 languages.2 2.2 BPEmb: Byte-Pair Embeddings Byte Pair Encoding (BPE) is an unsupervised segmentation method which operates by iteratively merging frequent pairs of adjacent symbols into new symbols. 
E.g., when applied to English text, BPE merges the characters h and e into the new byte-pair symbol he, then the pair consisting of the character t and the byte-pair symbol he into the new symbol the and so on. These merge operations are learned from a large background corpus. The set of byte-pair symbols learned in this fashion is called the BPE vocabulary. Applying BPE, i.e. iteratively performing learned merge operations, segments a text into subwords (see BPE segmentations for vocabulary sizes vs1000 to vs100000 in Table 1). By employing an embedding algorithm, e.g. GloVe (Pennington et al., 2014), to train embeddings on such a subword-segmented text, one obtains 2https://fasttext.cc/docs/en/ pretrained-vectors.html 275 embeddings for all byte-pair symbols in the BPE vocabulary. In this work, we evaluate BPEmb (Heinzerling and Strube, 2018), a collection of byte-pair embeddings trained on Wikipedia editions in 275 languages.3 2.3 BERT: Contextual Subword Embeddings One of the drawbacks of the subword embeddings introduced above, and of pretrained word embeddings in general, is their lack of context. For example, with a non-contextual representation, the embedding of the word play will be the same both in the phrase a play by Shakespeare and the phrase to play Chess, even though play in the first phrase is a noun with a distinctly different meaning than the verb play in the second phrase. Contextual word representations (Dai and Le, 2015; Melamud et al., 2016; Ramachandran et al., 2017; Peters et al., 2018; Radford et al., 2018; Howard and Ruder, 2018) overcome this shortcoming via pretrained language models. Instead of representing a word or subword by a lookup of a learned embedding, which is the same regardless of context, a contextual representation is obtained by encoding the word in context using a neural language model (Bengio et al., 2003). Neural language models typically employ a sequence encoder such as a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) or Transformer (Vaswani et al., 2017). In such a model, each word or subword in the input sequence is encoded into a vector representation. With a bidirectional LSTM, this representation is influenced by its left and right context through state updates when encoding the sequence from left to right and from right to left. With a Transformer, context influences a word’s or subword’s representation via an attention mechanism (Bahdanau et al., 2015). In this work we evaluate BERT (Devlin et al., 2019), a Transformer-based pretrained language model operating on subwords similar to BPE (see last row in Table 1). We choose BERT among the pretrained language models mentioned above since it is the only one for which a multilingual version is publicly available. Multilingual BERT4 has been trained on the 104 largest Wikipedia editions, so that, in contrast to FastText and BPEmb, many low-resource languages are not supported. 3https://nlp.h-its.org/bpemb/ 4https://github.com/google-research/ bert/blob/f39e881/multilingual.md Method #languages Intersect. 1 Intersect. 2 FastText 294 ) 265      101 Pan17 282 BPEmb 275 BERT 104 Table 2: Number of languages supported by the three subword embedding methods compared in our evaluation, as well as the NER baseline system (Pan17). 3 Multilingual Evaluation We compare the three different pretrained subword representations introduced in §2 on two tasks: NER and POS tagging. Our multilingual evaluation is split in four parts. 
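The merge-learning procedure described above can be sketched in a few lines. This is a toy illustration of the BPE algorithm on a whitespace-tokenized corpus; the vocabularies evaluated in this paper were built at a much larger scale (BPE-segmented Wikipedia with GloVe embeddings for BPEmb, and SentencePiece for the shared multilingual vocabularies of Section 3.3):

```python
from collections import Counter

def learn_bpe(corpus_tokens, num_merges):
    """Learn BPE merge operations from a list of word tokens.

    Each word is represented as a tuple of symbols, initially its characters.
    At every step, the most frequent pair of adjacent symbols is merged into
    a new symbol, as described in Section 2.2.
    """
    vocab = Counter(tuple(word) for word in corpus_tokens)
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges.append(best)
        # Apply the new merge operation to every word in the vocabulary.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

corpus = "the chess player played the opening he prepared".split()
print(learn_bpe(corpus, num_merges=5))   # the first learned merge is ('h', 'e')
```

The number of merges directly determines the vocabulary size discussed in Section 3.2: few merges give short, over-segmented subwords, while many merges leave frequent words unsplit.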
After devising a sequence tagging architecture (§3.1), we investigate an important hyper-parameter in BPE-based subword segmentation: the BPE vocabulary size (§3.2). Then, we conduct NER experiments on two sets of languages (see Table 2): 265 languages supported by FastText and BPEmb (§3.3) and the 101 languages supported by all methods including BERT (§3.4). Our experiments conclude with POS tagging on 27 languages (§3.4). Data. For NER, we use WikiAnn (Pan et al., 2017), a dataset containing named entity mention and three-class entity type annotations in 282 languages. WikiAnn was automatically generated by extracting and classifying entity mentions from inter-article links on Wikipedia. Because of this, WikiAnn suffers from problems such as skewed entity type distributions in languages with small Wikipedias (see Figure 6 in Appendix A), as well as wrong entity types due to automatic type classification. These issues notwithstanding, WikiAnn is the only available NER dataset that covers almost all languages supported by the subword representations compared in this work. For POS tagging, we follow Plank et al. (2016); Yasunaga et al. (2018) and use annotations from the Universal Dependencies project (Nivre et al., 2016). These annotations take the form of language-universal POS tags (Petrov et al., 2012), such as noun, verb, adjective, determiner, and numeral. 3.1 Sequence Tagging Architecture Our sequence tagging architecture is depicted in Figure 1. The architecture is modular and allows encoding text using one or more subword embedding methods. The model receives a sequence of tokens as input, here Magnus Carlsen played. After subword segmentation and an embedding 276 lookup, subword embeddings are encoded with an encoder specific to the respective subword method. For BERT, this is a pretrained Transformer, which is finetuned during training. For all other methods we train bidirectional LSTMs. Depending on the particular subword method, input tokens are segmented into different subwords. Here, BERT splits Carlsen into two subwords resulting in two encoder states for this token, while BPEmb with an LSTM encoder splits this word into three. FastText (not depicted) and character RNNs yield one encoder state per token. To match subword representations with the tokenization of the gold data, we arbitrarily select the encoder state corresponding to the first subword in each token. A meta-LSTM combines the token representations produced by each encoder before classification.5 Decoding the sequence of a neural model’s pre-classification states with a conditional random field (CRF) (Lafferty et al., 2001) has been shown to improve NER performance by 0.7 to 1.8 F1 points (Ma and Hovy, 2016; Reimers and Gurevych, 2017) on a benchmark dataset. In our preliminary experiments on WikiAnn, CRFs considerably increased training time but did not show consistent improvements across languages.6 Since our study involves a large number of experiments comparing several subword representations with cross-validation in over 250 languages, we omit the CRF in order to reduce model training time. Implementation details. Our sequence tagging architecture is implemented in PyTorch (Paszke et al., 2017). All model hyper-parameters for a given subword representation are tuned in preliminary experiments on development sets and then kept the same for all languages (see Appendix D). For many low-resource languages, WikiAnn provides only a few hundred instances with skewed entity type distributions. 
In order to mitigate the impact of variance from random train-devtest splits in such cases, we report averages of n-fold cross-validation runs, with n=10 for lowresource, n=5 for medium-resource, and n=3 for high-resource languages.7 For experiments in5In preliminary experiments (results not shown), we found that performing classification directly on the concatenated token representation without such an additional LSTM on top does not work well. 6The system we compare to as baseline (Pan et al., 2017) includes a CRF but did not report an ablation without it. 7Due to high computational resource requirements, we set n=1 for finetuning experiments with BERT. 102 103 104 105 106 Dataset size (#instances) 100000 50000 25000 10000 5000 3000 1000 Best BPE vocabulary size Figure 2: The best BPE vocabulary size varies with dataset size. For each of the different vocabulary sizes, the box plot shows means and quartiles of the dataset sizes for which this vocabulary size is optimal, according to the NER F1 score on the respective development set in WikiAnn. E.g., the bottom, pink box records the sizes of the datasets (languages) for which BPE vocabulary size 1000 was best, and the top, blue box the dataset sizes for which vocabulary size 100k was best. volving FastText, we precompute a 300d embedding for each word and update embeddings during training. We use BERT in a finetuning setting, that is, we start training with a pretrained model and then update that model’s weights by backpropagating through all of BERT’s layers. Finetuning is computationally more expensive, but gives better results than feature extraction, i.e. using one or more of BERT’s layers for classification without finetuning (Devlin et al., 2019). For BPEmb, we use 100d embeddings and choose the best BPE vocabulary size as described in the next subsection. 3.2 Tuning BPE In subword segmentation with BPE, performing only a small number of byte-pair merge operations results in a small vocabulary. This leads to oversegmentation, i.e., words are split into many short subwords (see BPE vs1000 in Table 1). With more merge operations, both the vocabulary size and the average subword length increase. As the byte-pair vocabulary grows larger it adds symbols corresponding to frequent words, resulting in such words not being split into subwords. Note, for example, that the common English preposition against is not split even with the smallest vocabulary size, or that played is split into the stem play and suffix ed with a vocabulary of size 1000, but is not split with larger vocabulary sizes. The choice of vocabulary size involves a tradeoff. On the one hand, a small vocabulary re277 BPEmb MultiBPEmb+char Languages Pan17 FastText BPEmb +char +shape +someshape -finetune +finetune All (265) 83.9 79.8 83.7 85.0 85.0 85.3 89.2 91.4 Low-res. (188) 81.6 76.7 79.7 81.4 81.5 81.9 89.7 90.4 Med-res. (48) 90.0 88.3 93.6 94.1 93.9 93.9 91.1 94.9 High-res. (29) 89.2 85.6 93.0 93.6 93.2 93.2 82.3 92.2 Table 3: NER results on WikiAnn. The first row shows macro-averaged F1 scores (%) for all 265 languages in the Intersect. 1 setting. Rows two to four break down scores for 188 low-resource languages (<10k instances), 48 medium-resource languages (10k to 100k instances), and 29 high-resource languages (>100k instances). quires less data for pre-training subword embeddings since there are fewer subwords for which embeddings need to be learned. 
Furthermore, a smaller vocabulary size is more convenient for model training since training time increases with vocabulary size (Morin and Bengio, 2005) and hence a model with a smaller vocabulary trains faster. On the other hand, a small vocabulary results in less meaningful subwords and longer input sequence lengths due to oversegmentation. Conversely, a larger BPE vocabulary tends to yield longer, more meaningful subwords so that subword composition becomes easier – or in case of frequent words even unnecessary – in downstream applications, but a larger vocabulary also requires a larger text corpus for pre-training good embeddings for all symbols in the vocabulary. Furthermore, a larger vocabulary size requires more annotated data for training larger neural models and increases training time. Since the optimal BPE vocabulary size for a given dataset and a given language is not a priori clear, we determine this hyper-parameter empirically. To do so, we train NER models with varying BPE vocabulary sizes8 for each language and record the best vocabulary size on the language’s development set as a function of dataset size (Figure 2). This data shows that larger vocabulary sizes are better for high-resource languages with more training data, and smaller vocabulary sizes are better for low-resource languages with smaller datasets. In all experiments involving byte-pair embeddings, we choose the BPE vocabulary size for the given language according to this data.9 3.3 NER with FastText and BPEmb In this section, we evaluate FastText and BPEmb on NER in 265 languages. As baseline, we com8We perform experiments with vocabulary sizes in {1000, 3000, 5000, 10000, 25000, 50000, 100000}. 9The procedure for selecting BPE vocabulary size is given in Appendix C. Figure 3: Impact of word shape embeddings on NER performance in a given language as function of the capitalization ratio in a random Wikipedia sample. pare to Pan et al. (2017)’s system, which combines morphological features mined from Wikipedia markup with cross-lingual knowledge transfer via Wikipedia language links (Pan17 in Table 3). Averaged over all languages, FastText performs 4.1 F1 points worse than this baseline. BPEmb is on par overall, with higher scores for medium- and high-resource languages, but a worse F1 score on low-resource languages. BPEmb combined with character embeddings (+char) yields the overall highest scores for medium- and high-resource languages among monolingual methods. Word shape. When training word embeddings, lowercasing is a common preprocessing step (Pennington et al., 2014) that on the one hand reduces vocabulary size, but on the other loses information in writing systems with a distinction between upper and lower case letters. As a more expressive alternative to restoring case information via a binary feature indicating capitalized or lowercased words (Curran and Clark, 2003), word shapes (Collins, 2002; Finkel et al., 2005) map 278 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Byte-pair symbol length 0k 20k 40k 60k 80k 100k 120k 140k Count Vocab. size 100k 320k 1000k Figure 4: The distribution of byte-pair symbol lengths varies with BPE vocabulary size. BPE vocabulary size 100k 320k 1000k Dev. F1 87.1 88.7 89.3 Table 4: Average WikiAnn NER F1 scores on the development sets of 265 languages with shared vocabularies of different size. characters to their type and collapse repeats. For example, Magnus is mapped to the word shape Aa and G.M. to A.A. 
Adding such shape embeddings to the model (+shape in Table 3) yields similar improvements as character embeddings. Since capitalization is not important in all languages, we heuristically decide whether shape embeddings should be added for a given language or not. We define the capitalization ratio of a language as the ratio of upper case characters among all characters in a written sample. As Figure 3 shows, capitalization ratios vary between languages, with shape embeddings tending to be more beneficial in languages with higher ratios. By thresholding on the capitalization ratio, we only add shape embeddings for languages with a high ratio (+someshape). This leads to an overall higher average F1 score of 85.3 among monolingual models, due to improved performance (81.9 vs. 81.5) on low-resource languages. One NER model for 265 languages. The reduction in vocabulary size achieved by BPE is a crucial advantage in neural machine translation (Johnson et al., 2017) and other tasks which involve the costly operation of taking a softmax over the entire output vocabulary (see Morin and Bengio, 2005; Li et al., 2019). BPE vocabulary sizes between 8k and 64k are common in neural machine translation. Multilingual BERT operates on a subword vocabulary of size 100k which is shared among 104 languages. Even with shared symbols among languages, this allots at best only a few thousand byte-pair symbols to each language. Given that sequence tagging does not involve taking a softmax over the vocabulary, much larger vocabulary sizes are feasible, and as §3.2 shows, a larger BPE vocabulary is better when enough training data is available. To study the effect of a large BPE vocabulary size in a multilingual setting, we train BPE models and byte-pair embeddings with subword vocabularies of up to 1000k BPE symbols, which are shared among all languages in our evaluation.10 The shared BPE vocabulary and corresponding byte-pair embeddings allow training a single NER model for all 265 languages. To do so, we first encode WikiAnn in all languages using the shared BPE vocabulary and then train a single multilingual NER model in the same fashion as a monolingual model. As the vocabulary size has a large effect on the distribution of BPE symbol lengths (Figure 4, also see §3.2) and model quality, we determine this hyper-parameter empirically (Table 4). To reduce the disparity between dataset sizes of different languages, and to keep training time short, we limit training data to a maximum of 3000 instances per language.11 Results for this multilingual model (MultiBPEmb) with shared character embeddings (+char) and without further finetuning -finetune show a strong improvement in low-resource languages (89.7 vs. 81.9 with +someshape), while performance degrades drastically on high-resource languages. Since the 188 low-resource languages in WikiAnn are typologically and genealogically diverse, the improvement suggests that low-resource languages not only profit from cross-lingual transfer from similar languages (Cotterell and Heigold, 2017), but that multilingual training brings other benefits, as well. In multilingual training, certain aspects of the task at hand, such as tag distribution and BIO constraints have to be learned only once, while they have to be separately learned on each language in monolingual training. 
Furthermore, multilingual training may prevent overfitting to biases in small monolingual datasets, such as a skewed tag distri10Specifically, we extract up to 500k randomly selected paragraphs from articles in each Wikipedia edition, yielding 16GB of text in 265 languages. Then, we train BPE models with vocabulary sizes 100k, 320k, and 1000k using SentencePiece (Kudo and Richardson, 2018), and finally train 300d subword embeddings using GloVe. 11With this limit, training takes about a week on one NVIDIA P40 GPU. 279 Figure 5: Shared multilingual byte-pair embedding space pretrained (left) and after NER model training (right), 2-d UMAP projection (McInnes et al., 2018). As there is no 1-to-1 correspondence between BPE symbols and languages in a shared multilingual vocabulary, it is not possible to color BPE symbols by language. Instead, we color symbols by Unicode code point. This yields a coloring in which, for example, BPE symbols consisting of characters from the Latin alphabet are green (large cluster in the center), symbols in Cyrillic script blue (large cluster at 11 o’clock), and symbols in Arabic script purple (cluster at 5 o’clock). Best viewed in color. BPEmb MultiBPEmb BERT Languages Pan17 FastText +char +char+finetune BERT +char +char+BPEmb All ∩BERT (101) 88.1 85.6 91.6 93.2 90.3 90.9 92.0 Low-res. ∩BERT (27) 83.6 81.3 85.1 91.1 85.4 85.6 87.1 Med-res. ∩BERT (45) 90.1 88.2 94.2 95.1 93.1 93.7 94.6 High-res. ∩BERT (29) 89.2 85.6 93.6 92.2 90.4 91.4 92.4 Table 5: NER F1 scores for the 101 WikiAnn languages supported by all evaluated methods. butions. A visualization of the multilingual subword embedding space (Figure 5) gives evidence for this view. Before training, distinct clusters of subword embeddings from the same language are visible. After training, some of these clusters are more spread out and show more overlap, which indicates that some embeddings from different languages appear to have moved “closer together”, as one would expect embeddings of semanticallyrelated words to do. However, the overall structure of the embedding space remains largely unchanged. The model maintains language-specific subspaces and does not appear to create an interlingual semantic space which could facilitate cross-lingual transfer. Having trained a multilingual model on all languages, we can further train this model on a single language (Table 3, +finetune). This finetuning further improves performance, giving the best overall score (91.4) and an 8.8 point improvement over Pan et al. on low-resource languages (90.4 vs. 81.6). These results show that multilingual training followed by monolingual finetuning is an effective method for low-resource sequence tagging. 3.4 NER with Multilingual BERT Table 5 shows NER results on the intersection of languages supported by all methods in our evaluation. As in §3.3, FastText performs worst overall, monolingual BPEmb with character embeddings performs best on high-resource languages (93.6 F1), and multilingual BPEmb best on lowresource languages (91.1). Multilingual BERT outperforms the Pan17 baseline and shows strong results in comparison to monolingual BPEmb. The combination of multilingual BERT, monolingual BPEmb, and character embeddings is best overall (92.0) among models trained only on monolingual NER data. However, this ensemble of contextual and non-contextual subword embeddings is inferior to MultiBPEmb (93.2), which was first trained on multilingual data from all languages collectively, and then separately finetuned to each language. 
Score distributions and detailed NER results for each language and method are shown in Appendix E and Appendix F. 280 BPEmb BERT MultiBPEmb+char Lang. BiLSTM Adv. FastText BPEmb +char +shape BERT +char +char+BPemb -finetune +finetune Avg. 96.4 96.6 95.6 95.2 96.4 95.7 95.6 96.3 96.8 96.1 96.6 bg 98.0 98.5 97.7 97.8 98.5 97.9 98.0 98.5 98.7 98.6 98.7 cs 98.2 98.8 98.3 98.5 98.9 98.7 98.4 98.8 99.0 97.9 98.9 da 96.4 96.7 95.3 94.9 96.4 95.9 95.8 96.3 97.2 94.4 97.0 de 93.4 94.4 90.8 92.7 93.8 93.5 93.7 93.8 94.4 93.6 94.0 en 95.2 95.8 94.3 94.2 95.5 94.9 95.0 95.5 96.1 95.2 95.6 es 95.7 96.4 96.3 96.1 96.6 96.0 96.1 96.3 96.8 96.4 96.5 eu 95.5 94.7 94.6 94.3 96.1 94.8 93.4 95.0 96.0 95.3 95.6 fa 97.5 97.5 97.1 95.9 97.0 96.0 95.7 96.5 97.3 97.0 97.1 fi 95.8 95.4 92.8 92.8 94.4 93.5 92.1 93.8 94.3 92.2 94.6 fr 96.1 96.6 96.0 95.5 96.1 95.8 96.1 96.5 96.5 96.2 96.2 he 97.0 97.4 97.0 96.3 96.8 96.0 96.5 96.8 97.3 96.5 96.6 hi 97.1 97.2 97.1 96.9 97.2 96.9 96.3 96.8 97.4 97.0 97.0 hr 96.8 96.3 95.5 93.6 95.4 94.5 96.2 96.6 96.8 96.4 96.8 id 93.4 94.0 91.9 90.7 93.4 93.0 92.2 93.0 93.5 93.0 93.4 it 98.0 98.1 97.4 97.0 97.8 97.3 97.5 97.9 98.0 97.9 98.1 nl 93.3 93.1 90.0 91.7 93.2 92.5 91.5 92.6 93.3 93.3 93.8 no 98.0 98.1 97.4 97.0 98.2 97.8 97.5 98.0 98.5 97.7 98.1 pl 97.6 97.6 96.2 95.8 97.1 96.1 96.5 97.7 97.6 97.2 97.5 pt 97.9 98.1 97.3 96.3 97.7 97.2 97.5 97.8 98.1 97.9 98.2 sl 96.8 98.1 97.1 96.2 97.7 96.8 96.3 97.4 97.9 97.7 98.0 sv 96.7 96.7 96.7 95.3 96.7 95.7 96.2 97.1 97.4 96.7 97.3 Table 6: POS tagging accuracy on high-resource languages in UD 1.2. BPEmb MultiBPEmb Lang. Adv. FastText +char +char+finetune Avg. 91.6 90.4 79.3 92.4 el 98.2 97.2 96.5 97.9 et 91.3 89.5 82.1 92.8 ga 91.1 89.2 81.6 91.0 hu 94.0 92.9 83.1 94.0 ro 91.5 88.6 73.9 89.7 ta 83.2 85.2 58.7 88.7 Table 7: POS tagging accuracy on low-resource languages in UD 1.2. 3.5 POS Tagging in 27 Languages We perform POS tagging experiments in the 21 high-resource (Table 6) and 6 low-resource languages (Table 7) from the Universal Dependencies (UD) treebanks on which Yasunaga et al. (2018) report state-of-the-art results via adversarial training (Adv.). In high-resource POS tagging, we also compare to the BiLSTM by Plank et al. (2016). While differences between methods are less pronounced than for NER, we observe similar patterns. On average, the combination of multilingual BERT, monolingual BPEmb, and character embeddings is best for high-resource languages and outperforms Adv. by 0.2 percent (96.8 vs. 96.6). For low-resource languages, multilingual BPEmb with character embeddings and finetuning is the best method, yielding an average improvement of 0.8 percent over Adv. (92.4 vs. 91.6). 4 Limitations and Conclusions Limitations. While extensive, our evaluation is not without limitations. Throughout this study, we have used a Wikipedia edition in a given language as a sample of that language. The degree to which this sample is representative varies, and low-resource Wikipedias in particular contain large fractions of “foreign” text and noise, which propagates into embeddings and datasets. Our evaluation did not include other subword representations, most notably ELMo (Peters et al., 2018) and contextual string embeddings (Akbik et al., 2018), since, even though they are languageagnostic in principle, pretrained models are only available in a few languages. Conclusions. 
We have presented a large-scale study of contextual and non-contextual subword embeddings, in which we trained monolingual and multilingual NER models in 265 languages and POS-tagging models in 27 languages. BPE vocabulary size has a large effect on model quality, both in monolingual settings and with a large vocabulary shared among 265 languages. As a rule of thumb, a smaller vocabulary size is better for small datasets and larger vocabulary sizes better for larger datasets. Large improvements over monolingual training showed that low-resource languages benefit from multilingual model training with shared subword embeddings. Such improvements are likely not solely caused by cross281 lingual transfer, but also by the prevention of overfitting and mitigation of noise in small monolingual datasets. Monolingual finetuning of a multilingual model improves performance in almost all cases (compare -finetune and +finetune columns in Table 9 in Appendix F). For high-resource languages, we found that monolingual embeddings and monolingual training perform better than multilingual approaches with a shared vocabulary. This is likely due to the fact that a high-resource language provides large background corpora for learning good embeddings of a large vocabulary and also provides so much training data for the task at hand that little additional information can be gained from training data in other languages. Our experiments also show that even a large multilingual contextual model like BERT benefits from character embeddings and additional monolingual embeddings. Finally, and while asking the reader to bear above limitations in mind, we make the following practical recommendations for multilingual sequence tagging with subword representations: • Choose the largest feasible subword vocabulary size when a large amount of data is available. • Choose smaller subword vocabulary sizes in low-resource settings. • Multilingual BERT is a robust choice across tasks and languages if the computational requirements can be met. • With limited computational resources, use small monolingual, non-contextual representations, such as BPEmb combined with character embeddings. • Combine different subword representations for better results. • In low-resource scenarios, first perform multilingual pretraining with a shared subword vocabulary, then finetune to the language of interest. 5 Acknowledgements We thank the anonymous reviewers for insightful comments. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany, and partially funded by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Emily M Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6(3):1–26. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. 
Journal of machine learning research, 3(Feb):1137–1155. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior research methods, 39(3):510–526. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Michael Collins. 2002. Ranking algorithms for named entity extraction: Boosting and the VotedPerceptron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Ryan Cotterell and Georg Heigold. 2017. Crosslingual character-level neural morphological tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 748–759. Association for Computational Linguistics. James Curran and Stephen Clark. 2003. Language independent NER using a maximum entropy tagger. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. 282 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 363–370. Association for Computational Linguistics. Alex Graves. 2012. Supervised sequence labelling with recurrent neural networks. Ph.D. thesis, Technical University of Munich. Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Association for Computational Linguistics. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71. Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, Williamstown, Mass., 28 June – 1 July 2001, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Efficient contextual representation learning without softmax layer. CoRR. Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1054–1063. Association for Computational Linguistics. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104– 113. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. Umap: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. Association for Computational Linguistics. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pages 246–252. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and 283 Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Autodiff Workshop, NIPS 2017. 
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2089–2096, Istanbul, Turkey. European Language Resources Association (ELRA). Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412–418. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 383–391. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 976–986. Association for Computational Linguistics. 284 A Analysis of NER tag distribution and baseline performance in WikiAnn 1 50 100 150 200 250 265 Tag entropy rank 100 102 104 106 Dataset size (log scale) 1 50 100 150 200 250 265 0 20 40 60 80 100 Pan17 NER F1 (%) 1 50 100 150 200 250 265 0.0 0.2 0.4 0.6 0.8 1.0 Tag relative frequency O I-PER B-PER I-ORG B-ORG I-LOC B-LOC Figure 6: WikiAnn named entity tag distribution for each language (top) in comparison to Pan et al. NER F1 scores (middle) and each language’s dataset size (bottom). Languages are sorted from left to right from highest to lowest tag distribution entropy. 
That is, the NER tags in WikiAnn for the language in question are well-balanced for higher-ranked languages on the left and become more skewed for lower-ranked languages towards the right. Pan et al. achieve NER F1 scores up to 100 percent on some languages, which can be explained by the highly skewed, i.e. low-entropy, tag distribution in these languages (compare F1 scores >99% in middle subfigure with skewed tag distributions in top subfigure). Better balance, i.e. higher entropy, of tag distribution tends to be found in languages for which WikiAnn provides more data (compare top and bottom subfigures). 285 B BPE and character-ngrams are not language-independent Some methods proposed in NLP are unjustifiedly claimed to be language-independent (Bender, 2011). Subword segmentation with BPE or character-ngrams is language-agnostic, i.e., such a segmentation can be applied to any sequence of symbols, regardless of the language or meaning of these symbols. However, BPE and characterngrams are based on the assumption that meaningful subwords consist of adjacent characters, such as the suffix -ed indicating past tense in English or the copular negation nai in Japanese. This assumption does not hold in languages with nonconcatenative morphology. For example, Semitic roots in languages such as Arabic and Hebrew are patterns of discontinuous sequences of consonants which form words by insertion of vowels and other consonants. For instance, words related to writing are derived from the root k-t-b: kataba “he wrote” or kitab “book”. BPE and characterngrams are not suited to efficiently capture such patterns of non-adjacent characters, and hence are not language-independent. C Procedure for selecting the best BPE vocabulary size We determine the best BPE vocabulary size for each language according to the following procedure. 1. For each language l in the set of all languages L and each BPE vocabulary size v ∈V , run n-fold cross-validation with each fold comprising a random split into training, development, and test set.12 2. Find the best BPE vocabulary size vl for each language, according to the mean evaluation score on the development set of each crossvalidation fold. 3. Determine the dataset size, measured in number of instances Nl, for each language. 4. For each vocabulary size v, compute the median number of training instances of the languages for which v gives the maximum evaluation score on the development set, i.e. e Nv = median({Nl|v = vl∀l ∈L}). 12V = {1000, 3000, 5000, 10000, 25000, 50000, 100000} in our experiments. 5. Given a language with dataset size Nl, the best BPE vocabulary size ˆvl is the one whose e Nv is closest to Nl: ˆvl = argmin v∈V Nl −e Nv 286 D Sequence Tagging Model Hyper-Parameters Task Subword method Hyper-parameter NER POS FastText Embedding dim. 300 300 Encoder biLSTM biLSTM Encoder layer size 256 256 Encoder layers 2 2 Dropout 0.5 0.2 Meta-LSTM layer size 256 256 Meta-LSTM layers 2 2 BPEmb Embedding dim. 100 100 Encoder biLSTM biLSTM Encoder layer size 256 256 Encoder layers 2 2 Dropout 0.5 0.2 Char. embedding dim. 50 50 Char. RNN layer size 256 256 Shape embedding dim. 50 50 Shape RNN layer size 256 256 Meta-LSTM layer size 256 256 Meta-LSTM layers 2 2 MultiBPEmb Embedding dim. 300 300 Encoder biLSTM biLSTM Encoder layer size 1024 1024 Encoder layers 2 2 Dropout 0.4 0.2 Char. embedding dim. 100 100 Char. RNN layer size 512 512 Meta-LSTM layer size 1024 1024 Meta-LSTM layers 2 2 BERT Embedding dim. 
768 768 Encoder Transformer Transformer Encoder layer size 768 768 Encoder layers 12 12 Dropout 0.2 0.2 Char. embedding dim. 50 50 Char. RNN layer size 256 256 Meta-LSTM layer size 256 256 Meta-LSTM layers 2 2 Table 8: Hyper-parameters used in our experiments. 287 E NER score distributions on WikiAnn 50 100 150 200 250 Method performance rank 0.0 0.2 0.4 0.6 0.8 1.0 NER F1 Pan17 FastText BPEmb+char MultiBPEmb+char 20 40 60 80 100 Method performance rank 0.0 0.2 0.4 0.6 0.8 1.0 NER F1 BERT BERT+char+BPEmb BPEmb+char MultiBPEmb+char Figure 7: NER results for the 265 languages represented in Pan et al. (2017), FastText, and BPEmb (top), and the 101 languages constituting the intersection of these methods and BERT (bottom). Per-language F1 scores achieved by each method are sorted in descending order from left to right. The data points at rank 1 show the highest score among all languages achieved by the method in question, rank 2 the second-highest score etc. 288 F Detailed NER Results on WikiAnn BPEmb BERT MultiBPEmb+char Language #inst. Pan17 FastText BPEmb +char +shape BERT +char +char+BPEmb -finetune +finetune ab 474 60.0 76.3 69.2 83.9 77.8 85.4 83.3 ace 3573 81.6 88.2 87.0 89.8 89.2 93.0 93.0 ady 693 92.7 82.2 86.3 90.9 91.9 96.3 96.3 af 14799 85.7 80.6 90.4 90.8 90.4 88.2 89.4 91.0 89.2 92.1 ak 244 86.8 68.9 72.5 89.5 75.8 91.3 94.1 als 7467 85.0 79.2 88.3 89.9 89.9 90.0 92.0 am 1032 84.7 35.8 62.1 66.8 67.2 75.7 76.3 an 12719 93.0 82.7 94.1 93.9 94.7 95.1 95.9 96.6 94.4 97.0 ang 3848 84.0 75.2 79.8 78.4 80.4 84.8 84.7 ar 164180 88.3 93.4 93.1 93.7 93.1 88.7 91.0 93.0 79.4 93.2 arc 1618 68.5 65.8 78.7 79.5 76.2 84.1 85.6 arz 3256 77.8 81.7 78.0 78.8 76.5 85.7 85.7 as 1338 89.6 93.5 87.5 87.3 86.1 90.7 90.9 ast 5598 89.2 82.1 89.8 89.5 90.3 91.2 92.1 92.4 94.6 94.9 av 1330 82.0 72.9 78.2 77.6 78.2 85.5 85.6 ay 7156 88.5 86.5 97.3 97.1 95.7 97.8 97.6 az 19451 85.1 77.5 89.7 89.5 88.7 88.8 89.5 90.3 85.0 90.8 azb 2567 88.4 92.3 87.5 89.0 88.1 90.0 89.2 88.8 93.2 93.9 ba 11383 93.8 93.4 95.6 96.2 95.9 96.0 95.8 96.5 96.5 97.2 bar 17298 97.1 93.7 97.1 97.4 97.6 97.1 97.7 97.7 97.9 98.3 bcl 1047 82.3 75.4 74.0 74.4 74.1 91.2 92.9 be 32163 84.1 84.3 90.7 91.9 91.5 89.2 91.0 92.0 86.9 92.0 bg 121526 65.8 89.4 95.5 95.8 95.7 93.4 94.2 95.7 89.8 95.5 bi 441 88.5 84.5 73.8 79.9 81.6 93.9 93.9 bjn 482 64.7 69.8 67.9 72.3 69.3 83.6 84.0 bm 345 77.3 67.1 63.3 64.0 71.2 79.8 80.8 bn 25898 93.8 96.0 95.9 95.8 95.9 95.3 95.2 96.6 92.2 96.3 bo 2620 70.4 85.0 87.2 87.0 83.6 85.8 86.2 bpy 876 98.3 96.4 95.2 96.8 95.6 97.0 95.2 94.4 97.9 97.9 br 17003 87.0 82.2 90.6 92.1 91.1 89.7 90.6 92.7 89.6 93.1 bs 24191 84.8 80.6 88.1 89.8 89.2 89.6 89.8 90.9 88.0 92.1 bug 13676 99.9 100.0 100.0 100.0 99.9 100.0 100.0 bxr 2389 75.0 73.7 76.6 78.0 79.8 84.9 85.4 ca 222754 90.3 86.1 95.7 96.2 95.9 93.7 94.9 96.1 89.3 95.7 cdo 2127 91.0 72.1 78.7 79.5 75.0 85.1 86.4 ce 29027 99.4 99.3 99.5 99.6 99.5 99.7 99.7 99.7 99.6 99.8 ceb 50218 96.3 98.3 99.0 98.9 99.0 99.3 99.2 99.3 98.4 99.4 ch 146 70.6 40.3 39.7 67.4 60.0 78.8 78.8 chr 527 70.6 65.9 61.4 63.6 69.7 84.0 84.9 chy 405 85.1 77.6 77.3 81.1 75.8 86.2 88.5 ckb 5023 88.1 88.7 88.9 88.7 89.0 90.0 90.2 co 5654 85.4 74.5 86.4 83.9 84.7 91.6 92.3 cr 49 91.8 57.6 40.0 30.8 51.9 90.0 90.0 crh 4308 90.1 88.2 90.6 92.6 91.3 93.0 93.3 cs 265794 94.6 85.7 94.3 95.0 94.7 92.7 93.8 94.3 85.0 94.5 csb 3325 87.0 82.6 83.3 88.0 88.9 88.2 89.7 cu 842 75.5 68.0 74.4 81.8 78.0 87.0 85.6 cv 10825 95.7 95.8 96.6 96.8 96.9 97.6 97.2 97.3 97.2 97.4 cy 26039 90.7 86.1 92.9 93.8 93.6 91.6 92.8 
93.0 90.5 94.4 da 95924 87.1 81.1 92.5 93.3 92.9 92.1 92.8 94.2 87.5 93.7 de 1304068 89.0 77.2 94.4 93.0 94.1 88.8 89.6 91.2 80.1 90.6 diq 1255 79.3 67.3 73.5 80.2 77.3 90.6 90.8 dsb 862 84.7 74.9 76.1 76.2 82.0 94.8 96.7 dv 1924 76.2 60.8 76.5 77.7 74.4 86.9 87.3 dz 258 50.0 51.8 88.2 80.5 76.2 93.3 91.4 ee 252 63.2 64.5 54.4 56.9 57.8 87.8 90.5 el 63546 84.6 80.9 92.0 92.3 92.5 89.9 90.8 93.0 84.2 92.8 eo 71700 88.7 84.7 93.7 94.3 94.2 88.1 94.8 es 811048 93.9 89.2 96.2 96.7 96.5 92.5 93.1 93.8 86.6 93.7 et 48322 86.8 81.8 91.9 92.9 92.4 91.0 92.3 93.2 87.1 93.2 eu 89188 82.5 88.7 94.7 95.4 95.1 94.9 95.2 96.2 91.0 96.0 ext 3141 77.8 71.6 78.3 78.8 78.8 85.4 87.4 fa 272266 96.4 97.2 96.9 97.3 96.8 94.7 95.3 96.1 86.7 96.2 ff 154 76.9 52.0 68.2 72.4 76.7 90.9 90.9 fi 237372 93.4 81.5 93.1 93.7 93.2 91.2 92.0 93.1 82.9 92.8 fj 125 75.0 49.8 65.9 52.7 52.4 100.0 100.0 fo 3968 83.6 82.4 85.1 87.7 87.1 92.0 92.2 fr 1095885 93.3 87.2 95.5 95.7 95.5 93.4 93.6 94.2 83.8 92.0 frp 2358 86.2 86.9 86.6 89.6 90.4 93.4 94.7 frr 5266 70.1 79.5 86.7 88.2 88.6 90.1 91.1 fur 2487 84.5 77.1 79.7 78.6 81.4 86.3 88.3 fy 9822 86.6 80.7 89.8 90.8 90.5 88.2 89.3 90.4 91.9 93.0 ga 7569 85.3 77.6 87.3 87.8 86.8 85.5 86.4 86.2 89.1 92.0 gag 6716 89.3 91.2 94.9 96.9 95.3 96.2 97.5 gan 2876 84.9 79.6 87.3 88.1 85.8 91.9 92.0 gd 4906 92.8 81.6 85.5 86.4 87.7 92.4 93.5 gl 43043 87.4 78.7 92.8 93.7 93.1 92.7 93.2 93.9 90.2 94.9 glk 667 59.5 83.8 65.5 73.5 69.4 76.8 80.7 gn 3689 71.2 72.3 82.1 79.9 81.1 83.5 85.4 gom 2192 88.8 93.6 95.8 95.6 95.4 92.7 95.8 got 475 91.7 61.3 62.8 70.2 67.8 81.4 82.6 gu 2895 76.0 79.4 76.8 79.5 78.8 76.6 76.6 83.3 82.9 83.1 gv 980 84.8 73.5 72.5 72.2 77.3 92.5 93.7 ha 489 75.0 85.5 82.9 82.8 81.3 94.7 93.8 289 BPEmb BERT MultiBPEmb+char Language #inst. 
Pan17 FastText BPEmb +char +shape BERT +char +char+BPEmb -finetune +finetune hak 3732 85.5 80.8 87.0 86.8 85.1 90.0 90.9 haw 1189 88.0 89.9 88.4 92.7 93.9 94.9 95.0 he 106569 79.0 91.6 90.8 91.2 90.6 84.8 88.4 91.3 70.6 88.9 hi 11833 86.9 89.2 89.9 89.4 88.9 84.4 87.3 88.9 88.9 91.8 hif 715 81.1 76.8 71.6 77.2 78.7 95.6 96.1 hr 56235 82.8 80.9 89.5 90.7 90.5 90.3 90.6 92.4 86.5 91.8 hsb 3181 91.5 91.7 88.3 90.4 91.7 95.9 95.8 ht 6166 98.9 99.0 98.8 99.1 98.8 98.6 99.0 98.8 99.6 99.7 hu 253111 95.9 85.3 95.0 95.4 95.2 92.4 93.1 94.4 86.3 94.7 hy 25106 90.4 85.0 93.2 93.6 93.5 92.0 92.7 93.7 89.3 94.4 ia 6672 75.4 79.3 81.3 84.2 84.7 88.5 89.9 id 131671 87.8 85.4 94.5 95.1 94.7 93.3 93.7 94.9 89.3 95.4 ie 1645 88.8 85.6 90.3 90.0 87.4 95.2 95.7 ig 937 74.4 68.9 82.7 83.4 83.6 88.9 89.5 ik 431 94.1 83.1 88.6 89.3 89.2 93.3 93.8 ilo 2511 90.3 80.9 87.6 81.2 86.1 95.8 96.3 io 2979 87.2 86.4 88.1 87.4 90.8 91.1 92.0 92.5 95.4 95.8 is 8978 80.2 75.7 85.6 87.0 87.1 86.8 83.8 87.5 88.4 90.7 it 909085 96.6 89.6 96.1 96.1 96.3 93.8 93.7 94.5 87.1 94.0 iu 447 66.7 68.6 84.0 88.9 86.6 92.8 92.3 ja 4902623 79.2 71.0 67.7 71.9 68.9 67.8 69.0 69.1 47.6 68.4 jbo 1669 92.4 87.9 89.0 90.6 88.7 94.4 94.5 jv 3719 82.6 67.4 83.6 87.3 87.1 87.6 88.1 89.0 92.3 93.2 ka 37500 79.8 89.0 89.5 89.4 88.5 85.3 87.6 89.7 81.4 89.3 kaa 1929 55.2 77.2 78.4 81.3 82.0 88.5 89.4 kab 3004 75.7 79.4 85.8 86.1 86.5 87.9 89.1 kbd 1482 74.9 74.3 81.3 83.7 84.8 90.4 91.6 kg 1379 82.1 93.0 91.8 93.8 95.7 95.4 95.6 ki 1056 97.5 93.6 91.9 93.5 93.3 97.2 97.2 kk 60248 88.3 93.8 97.0 97.5 97.1 97.3 97.3 97.8 95.9 97.6 kl 1403 75.0 86.4 83.6 85.9 88.8 92.9 92.6 km 4036 52.2 51.1 87.1 85.6 85.6 91.2 90.7 kn 3567 60.1 76.0 72.4 77.3 74.5 68.7 71.4 75.1 81.3 80.5 ko 188823 90.6 44.4 91.5 92.1 91.7 86.8 88.4 91.1 72.4 90.6 koi 2798 89.6 90.2 91.2 92.0 92.0 93.0 93.7 krc 1830 84.9 75.6 78.2 82.3 83.4 89.8 89.1 ks 117 75.0 23.4 23.8 40.7 34.1 64.2 64.2 ksh 1138 56.0 44.0 57.6 52.6 60.2 72.4 74.1 ku 2953 83.2 71.1 79.3 81.2 85.2 90.9 91.7 kv 2464 89.7 85.3 83.1 85.0 84.9 93.1 94.1 kw 1587 94.0 90.4 90.4 91.1 92.7 97.1 97.7 ky 2153 71.8 58.6 67.2 69.9 72.9 70.9 72.9 75.3 81.0 82.0 la 77279 90.8 93.1 96.2 97.1 97.0 96.8 97.1 97.3 92.8 97.1 lad 973 92.3 79.5 80.0 82.8 83.0 93.9 94.1 lb 10450 81.5 68.0 87.3 86.9 86.6 86.3 86.4 88.8 86.2 89.7 lbe 631 88.9 81.1 84.4 84.5 86.2 91.8 92.6 lez 3310 84.2 87.6 89.2 90.4 91.2 93.8 94.2 lg 328 98.8 92.0 91.5 91.3 91.0 97.2 97.2 li 4634 89.4 83.4 86.3 90.4 88.0 93.7 94.9 lij 3546 72.3 75.9 79.9 82.2 82.3 87.3 87.5 lmo 13715 98.3 98.6 98.5 98.8 99.0 99.1 99.3 99.3 98.8 99.3 ln 1437 82.8 68.3 74.3 81.3 78.8 87.2 87.4 lo 991 52.8 67.7 70.5 76.6 72.6 86.1 86.8 lrc 372 65.2 70.5 59.3 71.8 66.0 79.8 80.0 lt 60871 86.3 84.1 91.2 92.4 91.4 90.7 91.5 92.7 85.9 92.2 ltg 1036 74.3 78.3 80.6 82.1 82.8 88.8 89.0 lv 44434 92.1 87.6 92.7 94.1 93.9 91.9 93.1 94.2 87.2 94.0 mai 755 99.7 98.1 98.4 98.3 98.4 99.6 100.0 mdf 497 82.2 65.3 71.6 74.9 76.0 84.2 88.4 mg 11181 98.7 99.3 99.4 99.3 99.4 99.4 99.4 99.4 99.1 99.5 mhr 3443 86.7 88.4 89.0 92.2 89.9 94.8 95.3 mi 5980 95.9 92.6 96.2 96.5 96.1 96.4 97.6 min 3626 85.8 84.5 87.9 87.7 88.3 86.8 89.8 91.2 94.3 94.6 mk 29421 93.4 87.4 93.6 94.2 94.0 92.9 92.5 93.7 90.6 94.6 ml 19729 82.4 86.3 84.7 86.2 84.6 79.7 81.5 85.0 77.2 84.2 mn 2511 76.4 71.2 73.1 72.5 77.6 76.8 76.0 79.5 85.9 87.0 mr 14978 82.4 88.0 86.8 87.7 87.1 85.0 85.9 88.0 85.0 89.7 mrj 6036 97.0 96.9 96.8 96.9 97.6 97.7 98.3 ms 67867 86.8 88.0 95.4 95.9 95.4 94.9 95.4 95.9 92.3 96.7 mt 1883 82.3 68.9 77.1 80.1 78.9 
84.5 87.0 mwl 2410 76.1 65.1 75.4 73.7 73.4 80.0 80.8 my 1908 51.5 73.3 72.2 72.2 70.5 69.1 72.4 75.6 77.1 76.3 myv 2108 88.6 90.3 86.7 90.3 90.0 92.9 93.2 mzn 2491 86.4 89.2 88.5 87.7 86.6 91.8 92.2 na 1107 87.6 84.7 83.7 88.6 90.0 94.4 95.2 nap 4205 86.9 72.4 81.5 82.1 80.7 87.7 88.7 nds 4798 84.5 78.0 87.4 90.1 89.3 88.6 88.9 89.5 93.2 93.3 ne 1685 81.5 80.2 79.3 75.6 74.2 76.2 77.1 79.7 87.9 87.7 new 10163 98.2 98.6 98.3 98.2 98.3 97.9 98.4 98.3 98.8 99.5 nl 589714 93.2 85.2 94.4 95.5 95.3 92.6 92.5 93.5 86.9 93.5 nn 44228 88.1 85.3 93.6 94.7 94.2 93.3 93.4 94.5 90.6 95.0 no 233037 94.1 86.9 94.8 95.4 95.0 93.2 93.6 95.0 87.0 94.8 nov 3176 77.0 87.2 94.0 94.3 93.5 97.9 98.0 nrm 1281 96.4 89.7 88.1 91.9 92.4 97.9 98.3 nso 720 98.9 98.7 97.2 97.2 97.7 99.2 99.1 nv 2569 90.9 81.7 80.2 83.2 83.0 91.6 90.7 290 BPEmb BERT MultiBPEmb+char Language #inst. Pan17 FastText BPEmb +char +shape BERT +char +char+BPEmb -finetune +finetune ny 156 56.0 46.8 48.0 41.7 40.8 86.1 86.1 oc 16915 92.5 87.7 93.0 93.1 94.6 94.3 94.4 95.2 93.3 96.5 om 631 74.2 67.2 69.9 72.8 75.6 78.8 80.6 or 1362 86.4 75.6 86.6 84.0 82.2 92.5 93.0 os 2155 87.4 81.2 82.4 85.5 84.7 91.4 91.6 pa 1773 74.8 81.9 75.2 72.4 77.7 77.6 74.8 79.0 85.3 84.8 pag 1643 91.2 89.5 87.2 88.6 89.9 91.5 91.2 pam 1072 87.2 78.4 76.8 78.0 84.3 93.1 93.5 pap 1555 88.8 72.7 79.0 76.4 80.7 87.5 87.1 pcd 4591 86.1 86.9 88.1 91.4 90.3 91.4 92.2 pdc 1571 78.1 71.6 75.7 79.7 80.5 84.7 87.0 pfl 1092 42.9 56.6 62.3 65.0 64.9 76.5 78.9 pi 27 83.3 0.0 25.0 15.4 0.0 90.9 90.9 pih 470 87.2 78.5 73.1 76.7 86.0 91.8 91.8 pl 639987 90.0 86.0 94.4 95.0 94.5 91.0 91.4 92.9 84.2 92.6 pms 3809 98.0 95.7 96.4 96.1 96.1 97.0 97.3 97.9 97.9 98.2 pnb 5471 90.8 91.2 90.2 89.8 90.7 91.4 90.1 91.2 90.9 91.7 pnt 291 61.5 70.1 66.2 71.3 73.5 77.2 78.3 ps 6888 66.9 79.2 77.8 77.9 77.4 78.6 79.8 pt 452130 90.7 86.3 95.7 96.0 95.8 92.6 92.8 93.7 86.8 94.3 qu 6480 92.5 90.0 93.2 93.9 93.3 96.0 97.1 rm 6617 82.0 80.3 86.2 87.8 87.1 90.1 91.0 rmy 532 68.5 65.6 80.4 81.3 80.8 93.0 93.0 rn 179 40.0 52.6 65.7 65.2 82.6 94.7 94.7 ro 171314 90.6 87.6 95.7 96.8 95.6 94.8 94.7 95.6 90.4 96.4 ru 1192873 90.1 89.7 95.2 95.4 94.7 91.8 92.0 93.0 85.1 92.2 rue 1583 82.7 78.1 76.0 81.7 84.2 89.1 89.8 rw 1517 95.4 86.2 83.9 89.1 87.6 92.7 93.3 sa 1827 73.9 76.7 78.4 78.7 71.4 80.8 80.6 sah 3442 91.2 89.6 91.5 92.2 91.1 95.0 94.6 sc 917 78.1 74.6 71.9 70.8 76.4 86.9 86.6 scn 5181 93.2 82.6 88.9 91.1 90.7 91.5 91.6 92.4 95.0 95.2 sco 9714 86.8 84.1 88.9 90.7 90.7 89.0 89.8 91.1 90.8 93.2 sd 2186 65.8 80.1 78.7 81.7 75.2 82.0 84.9 se 1256 90.3 92.6 88.6 91.0 91.8 95.7 95.8 sg 245 99.9 71.5 92.0 86.2 93.2 96.0 96.0 sh 1126257 97.8 98.1 99.4 99.5 99.4 98.8 98.9 98.9 98.3 99.1 si 2025 87.7 87.0 80.2 80.3 79.4 85.2 87.3 sk 68845 87.3 83.5 92.4 93.5 93.1 92.9 93.7 94.4 88.5 94.5 sl 54515 89.5 86.2 93.0 94.2 93.8 93.0 94.4 95.1 90.9 95.2 sm 773 80.0 56.0 65.5 70.4 64.2 80.7 81.9 sn 1064 95.0 71.6 79.7 79.3 80.7 89.3 89.7 so 5644 85.8 75.3 82.6 84.5 84.5 88.0 89.3 sq 24602 94.1 85.5 93.2 94.2 94.2 94.3 94.8 95.5 93.3 95.7 sr 331973 95.3 94.3 96.8 97.1 97.1 96.4 96.3 96.8 92.9 96.6 srn 568 76.5 81.9 89.4 90.3 88.2 93.8 94.6 ss 341 69.2 74.1 81.9 77.2 82.6 87.4 88.0 st 339 84.4 78.6 88.2 93.3 91.1 96.6 96.6 stq 1085 70.0 76.6 78.9 77.4 74.1 91.4 91.9 su 960 72.7 53.5 58.8 57.0 66.8 76.4 69.6 68.1 87.3 89.0 sv 1210937 93.6 96.2 98.5 98.8 98.7 97.9 98.0 98.1 96.8 97.8 sw 7589 93.4 85.2 91.0 90.7 90.8 91.0 91.7 91.7 92.8 93.6 szl 2566 82.7 77.9 79.6 82.2 84.1 92.1 93.1 ta 25663 77.9 86.3 84.5 85.7 84.3 75.2 
84.2 te 9929 80.5 87.9 87.8 87.5 87.5 80.4 83.7 86.8 83.4 87.5 tet 1051 73.5 79.3 81.1 85.3 84.0 92.8 93.0 tg 4277 88.3 85.4 89.6 89.8 88.8 87.4 88.4 89.3 92.3 94.1 th 230508 56.2 81.0 80.8 81.4 81.6 70.2 78.4 77.6 42.4 77.7 ti 52 94.2 60.2 77.3 49.5 32.9 91.7 91.7 tk 2530 86.3 81.5 82.7 82.8 83.7 89.0 89.8 tl 19109 92.7 79.4 93.9 93.7 93.7 92.8 94.2 94.0 92.2 96.2 tn 750 76.9 72.6 72.3 79.8 81.2 83.6 84.7 to 814 92.3 77.0 67.6 74.9 81.2 86.3 88.2 tpi 1038 83.3 84.7 84.6 86.4 88.5 94.7 95.6 tr 167272 96.9 77.5 94.4 94.9 94.5 92.6 93.1 94.4 86.1 95.1 ts 227 93.3 94.4 78.9 86.3 77.0 91.3 92.2 tt 35174 87.7 96.9 98.4 98.4 98.4 98.4 98.2 98.6 97.7 98.8 tum 815 93.8 95.8 90.7 93.7 93.2 97.6 97.6 tw 491 94.6 91.2 87.5 92.3 94.8 97.9 97.9 ty 1004 86.7 90.8 97.2 94.3 96.0 95.4 95.6 tyv 842 91.1 70.3 73.4 67.2 65.0 84.6 84.5 udm 840 88.9 83.4 85.6 85.6 83.6 95.6 96.6 ug 1998 79.7 84.6 83.2 82.0 80.0 87.1 87.4 uk 319693 91.5 91.2 95.6 96.0 95.8 92.1 92.5 93.7 88.9 94.9 ur 74841 96.4 96.9 97.0 97.1 97.0 95.6 96.6 97.1 91.0 97.3 uz 91284 98.3 97.9 99.0 99.3 99.2 99.2 99.3 99.3 97.6 99.3 ve 141 99.9 31.8 21.0 58.6 73.0 89.2 89.2 vec 1861 87.9 78.3 80.3 84.8 82.7 92.9 93.0 vep 2406 85.8 87.1 88.8 89.0 89.3 92.0 93.2 vi 110535 89.6 88.1 93.4 94.1 93.8 92.5 93.4 94.4 85.2 94.8 vls 1683 78.2 70.7 78.2 78.7 78.7 83.8 84.5 vo 46876 98.5 98.3 99.1 99.5 99.3 98.7 99.1 99.2 97.4 99.7 wa 5503 81.6 78.9 84.6 83.7 84.4 87.1 87.0 war 11748 94.9 93.3 95.4 95.5 95.9 96.3 96.1 95.7 96.1 97.8 wo 1196 87.7 82.3 79.1 79.4 78.5 84.6 86.5 wuu 5683 79.7 67.5 87.0 87.6 86.7 91.5 92.5 291 BPEmb BERT MultiBPEmb+char Language #inst. Pan17 FastText BPEmb +char +shape BERT +char +char+BPEmb -finetune +finetune xal 1005 98.7 98.4 95.8 95.6 95.9 99.3 98.9 xh 134 35.3 15.8 32.3 26.4 35.0 82.1 82.1 xmf 1389 73.4 85.0 77.9 78.7 77.7 87.9 87.7 yi 2124 76.9 78.4 75.1 73.2 74.1 80.2 81.3 yo 3438 94.0 87.5 91.1 92.1 92.5 94.1 93.3 94.1 96.3 97.0 za 345 57.1 66.1 67.7 67.1 68.4 87.0 88.9 zea 7163 86.8 88.1 91.2 92.5 91.9 93.7 95.4 zh 1763819 82.0 78.7 78.6 80.4 78.2 77.2 78.5 79.2 58.3 76.6 zu 425 82.3 61.5 61.0 70.7 70.3 79.6 80.4 Table 9: Per-language NER F1 scores on WikiAnn.
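Returning to the vocabulary-size selection procedure of Appendix C, the sketch below implements steps 4 and 5, assuming the cross-validation of steps 1-3 has already produced, for every language, its dataset size and the vocabulary size that scored best on the development folds; the dictionary-based interface is an illustrative assumption, not the authors' code.

from statistics import median

V = [1000, 3000, 5000, 10000, 25000, 50000, 100000]

def median_dataset_sizes(best_vocab, dataset_size):
    """Step 4: for each vocab size v, the median dataset size of the
    languages for which v gave the best development score."""
    return {
        v: median(dataset_size[l] for l, vl in best_vocab.items() if vl == v)
        for v in V
        if any(vl == v for vl in best_vocab.values())
    }

def pick_vocab_size(n_instances, best_vocab, dataset_size):
    """Step 5: choose the vocab size whose median dataset size is closest
    to the dataset size of the language at hand."""
    med = median_dataset_sizes(best_vocab, dataset_size)
    return min(med, key=lambda v: abs(n_instances - med[v]))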
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2809–2818 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2809 Joint Effects of Context and User History for Predicting Online Conversation Re-entries Xingshan Zeng1,2, Jing Li3∗, Lu Wang4, Kam-Fai Wong1,2 1The Chinese University of Hong Kong, Hong Kong 2MoE Key Laboratory of High Confidence Software Technologies, China 3Tencent AI Lab, Shenzhen, China 4Northeastern University, Boston, MA, United States 1,2{xszeng,kfwong}@se.cuhk.edu.hk [email protected], [email protected] Abstract As the online world continues its exponential growth, interpersonal communication has come to play an increasingly central role in opinion formation and change. In order to help users better engage with each other online, we study a challenging problem of re-entry prediction foreseeing whether a user will come back to a conversation they once participated in. We hypothesize that both the context of the ongoing conversations and the users’ previous chatting history will affect their continued interests in future engagement. Specifically, we propose a neural framework with three main layers, each modeling context, user history, and interactions between them, to explore how the conversation context and user chatting history jointly result in their re-entry behavior. We experiment with two large-scale datasets collected from Twitter and Reddit. Results show that our proposed framework with biattention achieves an F1 score of 61.1 on Twitter conversations, outperforming the state-ofthe-art methods from previous work. 1 Introduction Interpersonal communication plays an important role in information exchange and idea sharing in our daily life. We are involved in a wide variety of dialogues every day, ranging from kitchen table conversations to online discussions, all help us make decisions, better understand important social issues, and form personal ideology. However, individuals have limited attentions to engage in the massive amounts of online conversations. There thus exists a pressing need to develop automatic conversation management tools to keep track of the discussions one would like to keep engaging in. To meet such demand, we study the problem of predicting online conversation re-entries, ∗Jing Li is the corresponding author. …… H1: Is there literally no one on twitter who wants to talk about LET ME IN with me? :( H2: I think the change in overall tone was enough to let LMI stand on it's own. Love Giacchino's score too. H3: I think if i had seen LMI again before making my top ten it would have made the cut. Oh well. H4: it's not as bad as I remembered on the blu-ray. Looks like shit next to Avatar, but so does everything lol …… User History of U1 Conversation 1 Conversation 2 T1[U2]: Instead of focusing on when Oscars got it wrong... Let's talk about when the Oscars got it right… T2[U1]: The Hurt Locker, The Departed, NCFOM, LOTR, Schindler's List, Braveheart, Gladiator, The Godfather Part 1 & 2. ...... …… T1[U3]: Almost fell asleep in the first hour of INCEPTION. In the theatre. T2[U4]: lol do you not like it? T3[U5]: Meh. MEMENTO = far better film. T4[U1]: apples and oranges, plain and simple. …… T5[U1]: Inception and Memento. Same filmmaker, but completely different scope, themes, ideas, genres, etc. Figure 1: Sample tweets in the chatting history of user U1 and two Twitter conversation snippets U1 engaged in. Hi: the i-th tweet in U1’s history. 
Ti[Uj]: the i-th turn posted by Uj. First entries by U1 are highlighted in blue in both conversations. U1 only returns to the second one. where we aim to forecast whether the users will return to a discussion they once entered. What will draw a user back? To date, prior efforts for re-entry prediction mainly focus on modeling users engagement patterns in the ongoing conversations (Backstrom et al., 2013) or rely on the social network structure (Budak and Agrawal, 2013), largely ignoring the rich information in users’ previous chatting history. Here we argue that effective prediction of one’s re-entry behavior requires the understanding of both the conversation context—what has been discussed in the dialogue under consideration, and user chatting history (henceforth user history)— what conversation topics the users are actively involved in. In Figure 1, we illustrate how the two factors together affect a user’s re-entry behavior. Along with two conversations that user U1 participated in, also shown is their chatting history in previous discussions. U1 comes back to the second conversation since it involves topics on movies (e.g. mentioning Memento and Inception) and thus suits their interests according to the chatting his2810 tory, which also talked about movies. In this work, we would like to focus on the joint effects of conversation context and user history, ignoring other information. It would be a more challenging yet general task, since information like social networks may be not available in some certain scenarios. To study how conversation context and user history jointly affect user re-entries, we propose a novel neural framework that incorporates and aligns the indicative representations from the two information source. To exploit the joint effects, four mechanisms are employed here: simple concatenation of the two types of representation, attention mechanism over turns in context, memory networks (Sukhbaatar et al., 2015) — able to learn context attentions in aware of user history, and bi-attention (Seo et al., 2016) — further capturing interactions from two directions (context to history and history to context). More importantly, our framework enables the re-entry prediction and corresponding representations to be learned in an end-to-end manner. On the contrary, previous methods for the same task rely on handcrafted features (Backstrom et al., 2013; Budak and Agrawal, 2013), which often require laborintensive and time-consuming feature engineering processes. To the best of our knowledge, we are the first to explore the joint effect of conversation context and user history on predicting re-entry behavior in a neural network framework. We experiment with two large-scale datasets, one from Twitter (Zeng et al., 2018), the other from Reddit which is newly collected1. Our framework with bi-attention significantly outperforms all the comparing methods including the previous state of the art (Backstrom et al., 2013). For instance, our model achieves an F1 score of 61.1 on Twitter conversations, compared to an F1 score of 57.0 produced by Backstrom et al. (2013), which is based on a rich set of handcrafted features. Further experiments also show that the model with bi-attention can consistently outperform comparisons given varying lengths of conversation context. It shows that bi-attention mechanism can well align users’ personal interests and conversation context in varying scenarios. 
After probing into the proposed neural framework with bi-attention, we find that meaningful representations are learned via exploring the joint 1The datasets and codes are released at: https:// github.com/zxshamson/re-entry-prediction effect of conversation context and user history, which explains the effectiveness of our framework in predicting re-entry behavior. Finally, we carry out a human study, where we ask two humans to perform on the same task of first re-entry prediction. The model with bi-attention outperforms both humans, suggesting the difficulty of the task as well as the effectiveness of our proposed framework. 2 Related Work Response Prediction. Previous work on response prediction mainly focuses on predicting whether users will respond to a given social media post or thread. Efforts have been made to measure the popularity of a social media post via modeling the response patterns in replies or retweets (Artzi et al., 2012; Zhang et al., 2015). Some studies investigate post recommendation by predicting whether a response will be made by a given user (Chen et al., 2012; Yan et al., 2012; Hong et al., 2013; Alawad et al., 2016). In addition to post-level prediction, other studies focus on response prediction at the conversation-level. Zeng et al. (2018) investigate microblog conversation recommendation by exploiting latent factors of topics and discourse with a Bayesian model, which often requires domain expertise for customized learning algorithms. Our neural framework can automatically acquire the interactions among important components that contribute to the re-entry prediction problem, and can be easily adapted to new domains. For the prediction of re-entry behavior in online conversations, previous methods rely on the extraction of manually-crafted features from both the conversation context and the user’s social network (Backstrom et al., 2013; Budak and Agrawal, 2013). Here we tackle a more challenging task, where the re-entries are predicted without using any information from social network structure, which ensures the generalizability of our framework to scenarios where such information is unavailable. Online Conversation Behavior Understanding. Our work is also in line with conversational behavior understanding, including how users interact in online discourse (Ritter et al., 2010) and how such behavior signals the future trajectory, including their continued engagement (Backstrom et al., 2013; Jiao et al., 2018) and the appearance of impolite behavior (Zhang et al., 2018). To 2811 t1 a1 Turn Encoder LSTM . . . . . . . . . m1 Turn Encoder m|u| Turn Encoder . . . . . . . . . . . . . . . . . . Interaction Modeling Layer for Combining Context and User History rO Linear + Sigmoid Context Modeling Layer User History Modeling Layer Embedding H" # LSTM LSTM t|c| a|c| H|%| # H" & H|'| & ((*, ,) Turn Encoder Embedding Embedding Embedding Figure 2: The generic framework for re-entry prediction. We implement it with three encoders (Average Embedding, CNN, and BiLSTM) for turn modeling and four mechanisms (Simple Concatenation, Attention, Memory Networks, and Bi-attention) for modeling interactions between context and user history. better understand the structure of conversations, Recurrent Neural Network (RNN)-based methods have been exploited to capture temporal dynamics (Cheng et al., 2017; Zayats and Ostendorf, 2018; Jiao et al., 2018). 
Different from the above work, our model not only utilizes the conversations themselves, but also leverages users’ prior posts in other discussions. 3 Neural Re-entry Prediction Combining Context and User History This section describes our neural network-based conversation re-entry prediction framework exploring the joint effects of context and user history. Figure 2 shows the overall architecture of our framework, consisting of three main layers: context modeling layer, user history modeling layer, and interaction modeling layer to learn how information captured by the previous two layers interact with each other and make decisions conditioned on their joint effects. Here we adopt four mechanisms for interaction modeling: simple concatenation, attention, memory networks, and biattention, which will be described later. 3.1 Input and Output We start with formulating model input and output. At input layer, our model is fed with two types of information, the chatting history of the target user u and the observed context of the target conversation c. The goal of our model is to output a Bernoulli distribution p(u, c) indicating the estimated likelihood of whether u will re-engage in the conversation c. Below gives more details. Formally, we formulate the context of c as a sequence of chronologically ordered turns ⟨t1, t2, · · · , t|c|⟩, where the last turn t|c| is posted by u (we then predict u’s re-entries afterwards). Each turn t is represented by a sequence of words wt, and an auxiliary triple, at = ⟨it, rt, ut⟩, where it, rt, and ut are three indexes indicating the position of turn t, which turn t replies to, and the author of t, respectively. Here at is used to record the replying structures as well as the user’s involvement pattern. For the user history, we formulate it as a collection of u’s chatting messages {m1, m2, · · · , m|u|}, all posted before the time t|c| occurs. Each message m is denoted as its word sequence, wm. In the following, we explain how the aforementioned representations are processed by our model to make predictions. The three main layers in Figure 2 are described in Sections 3.2, 3.3, and 3.4, respectively. The learning objective is presented in Section 3.5. 3.2 Context Modeling Layer The context modeling layer captures representations from the observed context for the target conversation c. To this end, we jointly model the content in each turn (henceforth turn modeling) and the turn interactions in conversation structure (henceforth structure modeling). Turn Modeling. The turn representations are modeled via turn-level word sequence with a turn encoder. We exploit three encoders here: Average Embedding (Averaging each word’s embedding representation), CNN (Convolutional Neural Networks), and BiLSTM (Bidirectional Long Short-Term Memory). BiLSTM’s empirical performance turns out to be slightly better (will be reported in Table 2). Concretely, given the conversation turn t, each word wi of t is represented as a vector mapped by an embedding layer I(·), which is initialized by pre-trained embeddings and updated during training. The embedded vector I(wi) is then fed into the turn encoder, yielding the turn representation for t, denoted by HT t .2 2For all the BiLSTM encoders in this work, without otherwise specified, we take the concatenation of all hidden states from both the directions as its learned representations. 2812 Structure Modeling. 
To learn the conversational structure representations for c, our model applies BiLSTM, namely structure encoder, to capture the interactions between adjacent turns in its context. Each state of this structure encoder sequentially takes t’s turn representation, HT t , concatenated with the auxiliary triple, at, as input to produce the structure representation HC. Our intuition is that HC should capture both the content of the conversation and interaction patterns among its participants. Then HC, considered as the context representation for c, is sent to interaction modeling layer as part of its input. 3.3 User History Modeling Layer To encode the user history for target user u, in this layer, we first apply the same encoder in turn modeling to encode each chatting message m by u, as they both explore the post-level representations. The turn encoder is sequentially fed with the embedded word in m, and produce the messagelevel representation HM m . All messages in u’s user history are further concatenated into a matrix HU, serving as u’s user history representation and the input of the next layer. 3.4 Interaction Modeling Layer To capture whether the discussion points in c match the interests of u, HC (from context modeling) and HU (from user history modeling) are merged through an interaction modeling mechanism over the two sources of information. We hypothesize that users will be likely to come back to a conversation if its topic fits their own interests. Here, we explore four different mechanisms for interaction modeling. Their learned interaction representation, denoted as rO, is fed into a sigmoid-activated neural perceptron (Glorot et al., 2011), for predicting final output p(u, c). It indicates how likely the target user u will re-engage in the target conversation c. We then describe the four mechanisms to learn rO in turn below. Simple Concatenation. Here we simply put context representation (last state) and user representations (with average pooling) side by side, yielding rO = [HC |c|; P|u| j HU j /|u|] as the interaction representation for re-entry prediction. Attention. To capture the context information useful for re-entry prediction, we exploit an attention mechanism (Luong et al., 2015) over HC. Attentions are employed to “soft-address” important context turns according to their similarity with user representation (with average pooling). Here we adopt dot attention weights and define the attended interaction representation as: rO = |c| X i αi ·HC i , αi = softmax(HC i · |u| X j HU j /|u|) (1) Memory Networks. To further recognize indicative chatting messages in user history, we also apply end-to-end memory networks (MemN2N) (Sukhbaatar et al., 2015) for interaction modeling. It can be seen as a recurrent attention mechanism over chatting messages (stored in memory). Hence fed with context representation, memory networks will yield a memory-aware vector as interaction representation: rO = |u| X j αj ·fturn(HU j ), αj = softmax(HC |c| ·HU j ) (2) where fturn(·) denotes the unit function used for turn modeling. Here we adopt multi-hop memory mechanism to allow deep user interests to be learned from chatting history. For more details, we refer the readers to Sukhbaatar et al. (2015). Bi-attention. Inspired by Seo et al. (2016), we also apply bi-attention mechanism to explore the joint effects of context and user history. 
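Before the bi-attention variant is described next, the PyTorch-style sketch below illustrates the simpler dot-attention interaction of Eq. (1): context turns are soft-addressed by their similarity to the average-pooled user history, and the attended turns are summed into the interaction representation. It assumes, for illustration only, that the context states HC form a (|c|, d) tensor and the user-history states HU a (|u|, d) tensor with a shared dimensionality d; this is a sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def attention_interaction(H_c: torch.Tensor, H_u: torch.Tensor) -> torch.Tensor:
    """Eq. (1): attend over context turns using the averaged user history."""
    user_summary = H_u.mean(dim=0)               # average pooling over history messages
    scores = H_c @ user_summary                  # dot-product score per context turn, (|c|,)
    alpha = F.softmax(scores, dim=0)             # attention weights over turns
    r_o = (alpha.unsqueeze(1) * H_c).sum(dim=0)  # attended interaction vector, (d,)
    return r_o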
Intuitively, the bi-attention mechanism looks for evidence, if any, indicating the topics of the current conversation that align with the user’s interests from two directions (i.e. context to history and history to context), such as the names of two movies Inception and Let Me In shown in Figure 1. Concretely, bi-attention mechanism captures contextaware attention over user history messages: αU ij = exp(fscore(HC i , HU j )) P|u| j′=1 exp(fscore(HC i , HU j′)) (3) where the alignment score function takes a form of fscore(HC i , HU j ) = Wbi−att[HC i ; HU j ; HC i ◦HU j ]. It captures the similarity of the i-th context turn and the j-th user history message. The weight vector Wbi−att is learnable in training. Likewise, we compute user-aware attention over context turns. Afterwards, the bi-directional attended representations are concatenated and passed into a ReLU-activated multilayer perceptron (MLP), yielding representation r. r, as turnlevel representation, is then sequentially fed into a two-layer BiLSTM, to produce the interaction representation rO. 2813 3.5 Learning Objective For parameter learning in our model, we design the objective function based on cross-entropy loss as following: L = − X i  λyi log(ˆyi) + µ(1 −yi) log(1 −ˆyi)  (4) where the two terms reflect the prediction on positive and negative instances, respectively. Moreover, to take the potential data imbalance into account, we adopt two trade-off weights λ and µ. The parameter values are set based on the proportion of positive and negative instances in the training set (see Section 4). ˆyi denotes the re-entry probability estimated from p(u, c) for the i-th instance, and yi is the corresponding binary groundtruth label (1 for re-entry and 0 for the opposite). 4 Experimental Setup Data Collection and Statistic Analysis. To study re-entry behavior in online conversations, we collected two datasets: one is released by Zeng et al. (2018) containing Twitter conversations formed by tweets from the TREC 2011 microblog track data3 (henceforth Twitter), and the other is newly collected from Reddit (henceforth Reddit), a popular online forum. In our datasets, the conversations from Twitter concern diverse topics, while those from Reddit focus on the political issues. Both datasets are in English. To build the Reddit dataset, we first downloaded a large corpus publicly available on Reddit platform.4 Then, we selected posts and comments in subreddit “politics” posted from Jan to Dec 2008. Next, we formed Reddit posts and comments into conversations with replying relations revealed by the “parent id” of each comment. Last, we removed conversations with only one turn. In our main experiment, we focus on first reentry prediction, i.e. we predict whether a user u will come back to a conversation c, given current turns until u’s first entry in c as context and u’s past chatting messages (posted before u engaging in c). For model training and evaluation, we randomly select 80%, 10%, and 10% conversations to form training, development, and test sets. The statistics of the two datasets are shown in Table 1. As can be seen, users participate twice on 3https://trec.nist.gov/data/tweets/ 4https://www.reddit.com/r/datasets/ comments/3bxlg7/i_have_every_publicly_ available_reddit_comment/ Twitter Reddit # of users 10,122 13,134 # of conversations 7,500 29,477 # of re-entry instances 5,875 12,780 # of non re-entry instances 8,677 39,988 Avg. # of convs per user 1.7 5.9 Avg. # of msgs in user history 3.9 8.4 Avg. 
# of entries per user per conv 2.0 1.3 Avg. # of turns per conv 5.2 3.7 Avg. # of users per conv 2.3 2.6 Table 1: Statistics of two datasets. 2^0 2^4 2^8 2^12 2^16 0 10 20 30 40 50 # of users # of msgs in user history Twitter Reddit > (a) User history 2^0 2^4 2^8 2^12 2^16 2 9 16 23 >30 # of convs # of turns in conv. context Twitter Reddit (b) Conversation context Figure 3: Distributions of message number in user history and turn number in conversation context on the two datasets. average in Twitter conversations, and the number is only 1.3 on Reddit. This results in the severe imbalance over instances of re-entry and non re-entry (negative samples where users do not come back) on both datasets. Therefore, strategies should be adopted for alleviating the data imbalance issue, as done in Eq. (4). It indicates the sparse user activity in conversations, where most users engage in a conversation only once or twice. Thus predicting user re-entries only with context will not perform well, and the complementary information underlying user history should be leveraged. We further study the distributions of message number in user history and turn number in conversation context on both datasets. As shown in Figure 3, there exists severe sparsity in either user history or conversation context. Thus combining them both might help alleviate the sparsity in one information source. We also notice that Twitter and Reddit users exhibit different conversation behaviors. Reddit users tend to engage in more conversations, resulting in more messages in user history (as shown in Figure 3(a)). Twitter users are more likely to stay within each conversation, leading to lengthy discussions and larger re-entry frequencies on average, as shown in Figure 3(b) and Table 1. 2814 Data Preprocessing and Model Setting. For preprocessing Twitter data, we applied Glove tweet preprocessing toolkit (Pennington et al., 2014).5 For the Reddit dataset, we first applied the open source natural language toolkit (NLTK) (Loper and Bird, 2002) for word tokenization. Then, we replaced links with the generic tag “URL” and removed all the nonalphabetic tokens. For both datasets, a vocabulary was built and maintained in experiments with all the tokens (including emoticons and punctuation) from training data. For model setups, we initialize the embedding layer with 200-dimensional Glove embedding (Pennington et al., 2014), where Twitter version is used for our Twitter dataset and the Common Crawl version applied on Reddit dataset.6 All the hyper-parameters are tuned on the development set by grid search. The batch size is set to 32. Adam optimizer (Kingma and Ba, 2014) is adopted for parameter learning with initial learning rate selected among {10−3, 10−4, 10−5}. For the BiLSTM encoders, we set the size of their hidden states to 200 (100 for each direction). For the CNN encoders, we use filter windows of 2, 3, and 4, each with 50 feature maps. In MemN2N interaction mechanism, we set hop numbers to 3. In the learning loss, we set µ = 1 and λ = 2, the weights to tackle data imbalance. For re-entry prediction, a user is considered to come back if the estimated probability for re-entry is larger than 0.5. Baselines and Comparisons. For comparisons, we consider three baselines. RANDOM baseline: randomly pick up a “yes-or-no” answer. HISTORY baseline: predict based on users’ history re-entry rate before current conversation, which will answer “yes” if the rate exceeds a pre-defined threshold (set on development data), and “no” otherwise. 
Baselines and Comparisons. For comparisons, we consider three baselines. RANDOM baseline: randomly pick up a "yes-or-no" answer. HISTORY baseline: predict based on users' history re-entry rate before current conversation, which will answer "yes" if the rate exceeds a pre-defined threshold (set on development data), and "no" otherwise. (For users who lack such information before current conversation, it predicts "yes or no" randomly.) ALL-YES baseline: always answers "yes" in re-entry prediction. Its assumption is that users tend to be drawn back to the conversations they once participated by the platform's auto messages inviting them to return. For supervised models, we compare with CCCT, the state-of-the-art method proposed by Backstrom et al. (2013), where the bagged decision tree with manually-crafted features (including arrival patterns, timing effects, most related terms, etc.) are employed for re-entry prediction. We do not compare with Budak and Agrawal (2013), since most of its features are related to social networks or Twitter group information, which is unavailable in our data. In our proposed neural framework, we further compare varying encoders for turn modeling and mechanisms to model the interactions between user history and conversation context. We first compare three turn encoders — AVG-EMBED (average embedding), CNN, and BILSTM, to examine their performance in turn representation learning. Their results are compared on our variant only with context modeling layer and the best encoder (turned out to be BILSTM) is applied on the full model. For the interaction modeling layer, we also study the effectiveness of four mechanisms to combine user history and conversation context — simple concatenation (CON), attention (ATT), memory networks (MEM), and bi-attention (BIA).
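For reference, a minimal sketch of the three rule-based baselines and of the metrics used in the results table could look as follows. The instance format (and the past_reentry_rate field in particular) is a hypothetical assumption, and scikit-learn is used for the metrics.

```python
# Rough sketch, under our own assumptions about the data format, of the
# RANDOM / HISTORY / ALL-YES baselines and the AUC / F1 / precision / recall metrics.
import random
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

def random_baseline(instances):
    return [random.randint(0, 1) for _ in instances]

def all_yes_baseline(instances):
    return [1 for _ in instances]

def history_baseline(instances, threshold=0.5):
    # `past_reentry_rate` is a hypothetical field: the user's re-entry rate before the
    # current conversation; users without such information get a random answer.
    return [int(x["past_reentry_rate"] > threshold)
            if x.get("past_reentry_rate") is not None else random.randint(0, 1)
            for x in instances]

def evaluate(y_true, y_pred):
    # In the paper AUC would be computed from predicted probabilities; hard labels
    # are used here only because the rule baselines output binary decisions.
    return {
        "AUC": roc_auc_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
    }
```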
5 Results and Analysis
This section first discusses prediction results of first re-entry in Section 5.1. We then present the results of the second and third re-entry prediction in Section 5.2, as well as an analysis on user history effects. Section 5.3 then provides explanations on what we learn from the joint effects from context and user history, indicative of user re-entries. Finally, we conduct a human study to compare human performance on the same task with our best model (Section 5.4).

5.1 First Re-entry Prediction Results
In the main experiment, we adopt the automatic evaluation metrics — AUC, F1 score, precision, and recall, and focus on the prediction of the major re-entry type — first re-entry, where conversation context up to the user's first participation is given. As shown in Table 1, most users, if re-entry, only return once to a conversation. Also, in conversation management, the prediction of first re-entry is a challenging yet practical problem. We will discuss second and third re-entry prediction later in Section 5.2. The comparison results are reported in Table 2.

                              Twitter                           Reddit
Models              AUC   F1 Score  Precision  Recall   AUC   F1 Score  Precision  Recall
Baselines
  RANDOM            51.0    45.0      40.3      50.9    49.4    32.6      24.5      48.7
  HISTORY           50.1    46.4      42.2      51.4    50.7    35.2      26.9      50.9
  ALL-YES           50.0    54.9      37.9     100.0    50.0    38.5      23.8     100.0
S.O.T.A.
  CCCT              57.7    57.0      45.5      76.4    59.9    39.8      44.7      36.0
W/O History
  AVG-EMBED         60.4    59.0      43.5      91.8    63.7    42.4      31.0      67.2
  CNN               58.8    59.1      43.2      93.5    64.0    42.8      31.1      68.5
  BILSTM            60.4    59.4      45.8      85.0    64.1    43.1      31.4      69.5
With History
  BILSTM+CON        51.0    58.0      40.9     100.0    50.1    38.6      24.0      98.3
  BILSTM+ATT        58.4    59.0      44.6      87.3    60.3    41.3      27.8      82.4
  BILSTM+MEM        61.3    59.9      45.7      87.5    65.5    43.7      31.8      69.9
  BILSTM+BIA        62.7    61.1      47.0      87.7    67.1    45.4      33.9      68.9
Table 2: Results on first re-entry prediction. The best results in each column are in bold.

On both datasets, we observe:
• First re-entry prediction is challenging. All models produce AUC and F1 scores below 70. Model BILSTM+BIA yields significantly better AUC and F1 scores than all other comparisons (p < 0.05, paired t-test). In particular, models built on rules and features with shallow content and network features perform poorly, suggesting the need of better understanding of conversations or more information like user's chatting history. We also observe that HISTORY yields only slightly better results than RANDOM. It suggests that users' re-entries depend on not only their past re-entry patterns, but also the conversation context.
• Well-encoded user chatting history is effective. Among neural models, our BILSTM+MEM and BILSTM+BIA models outperform other comparisons by successfully modeling users' previous messages and their alignment with the topics of ongoing conversations. However, the opposite observation is drawn for BILSTM+CON and BILSTM+ATT. It is because the interactions between context and user history are effective yet complex, requiring well-designed merging mechanisms to exploit their joint effects.
• Bi-attention mechanism better aligns the users' interests and the conversation topics. BILSTM+BIA achieves the best AUC and F1 scores, significantly outperforming all other comparison models on both datasets. In particular, it beats BILSTM+MEM, which is also able to learn the interaction between user history and conversation content, indicating the effectiveness of bi-attention over memory networks in this task.

Interestingly, comparing the results on the two datasets, we notice all models yield better recall and F1 on Twitter than Reddit. This is due to the fact that Reddit users are more likely to abandon conversations, reflected as the fewer number of entries in Table 1. Twitter users, on the other hand, tend to stay longer in the conversations, which encourages all models to predict the return of users.

[Figure 4: F1 scores for prediction on the first, second, and third re-entries (given the conversation context until the last entry). X-axis: # of turns in the given conversation context. Both figures, from left to right, show the F1 scores by ALL-YES, CCCT, BILSTM, BILSTM+MEM, and BILSTM+BIA. (a) Twitter Dataset; (b) Reddit Dataset.]

5.2 Predicting Re-entries with Varying Context and User History
Here we study the effects of varying conversation context and user history over re-entry prediction.

Results with Varying Context. We first discuss model performance given different amounts of conversation context by varying the number of user entries. Figure 4 shows the F1 scores for predicting the first, second, and third re-entries. For predicting second or third re-entries, turns of current context until the given user's second or third entry will be given. As can be seen, all models' performance monotonically increases when more context is observed. Our BILSTM+BIA uniformly outperforms other methods in all setups. Interestingly, baseline ALL-YES achieves the most performance gain when additional context is given. This implies that the more a user contributes to a conversation, the more likely they will come back.

Results with Varying User History. We further analyze how model performance differs when different amounts of messages are given in the user history.

[Figure 5: F1 scores of model BILSTM+BIA on first re-entry prediction, with varying numbers of chatting messages given in user history.]
From Figure 5, we can see that it generally yields better F1 scores when more messages are available for the user history, suggesting the usefulness of chatting history to signal user re-entries. The performance on Reddit does not increase as fast as observed on Twitter, which may be mainly because the context from Reddit conversations is often limited.

5.3 Further Discussion
We further discuss our models with an ablation study and a case study to understand and interpret their prediction results.

Ablation Study. To examine the contribution of each component in our framework, we present an ablation study on the first re-entry prediction task. Table 3 shows the results of our best full model (BILSTM+BIA) together with its variant without using turn-level auxiliary meta a_t (defined in Section 3.1 to record user activity and replying relations in context), and that without the structure modeling layer (to capture conversation discourse in context described in Section 3.2); also compared are variants without using user chatting history (described in Section 3.3).

                             Twitter                Reddit
Models                 F1     Pre    Rec      F1     Pre    Rec
W/O History
  W/O SML             58.8    42.6   95.1    39.6    25.2   92.9
  With SML            59.4    45.9   85.0    43.1    31.4   69.5
With History
  W/O SML             57.5    43.2   86.7    43.8    31.3   74.4
  W/O Meta            60.4    46.6   86.1    44.3    31.3   75.8
  Full model          61.1    47.0   87.7    45.4    33.9   68.9
Table 3: Results of our variants. SML: structure modeling layer. Meta: auxiliary triples a_t. Our full model BILSTM+BIA obtains the best F1.

Models          Conv. 1 (C1)   Conv. 2 (C2)
CCCT                1.0            1.0
BILSTM              0.386          0.480
BILSTM+MEM          0.583          0.712
BILSTM+BIA          0.460          0.581
Table 4: Predicted probabilities by different models for user U1's re-entry to conversations C1 and C2 in Figure 1. CCCT can only yield binary outputs. For other neural models, the predicting threshold is 0.5.

Our full model yields the best F1 scores, showing the joint effects of context and user history can usefully indicate user re-entries. We also see that auxiliary triples, though conveying simple meta data for context turns, are helpful in our task. In addition, interestingly, conversation structure looks more effective in models leveraging user history, because they can learn deeper semantic relations between context turns and user chatting messages.

Case Study. We further utilize a case study based on the sample conversations shown in Figure 1 to demonstrate what our model learns. Table 4 displays the outputs from different models on estimating how likely U1 will re-engage in conversation 1 (C1) and conversation 2 (C2), where U1 returns to the latter. All neural models successfully forecast that U1 is more likely to re-engage in C2, while only BILSTM+BIA yields correct results (given threshold 0.5). We further visualize the attention weights output by BILSTM+BIA's bi-attention mechanism with a heatmap in Figure 6. As can be seen, it assigns higher attention values to turns T2 and T3 in conversation C2, due to their topical similarity with user U1's interests, i.e. movies, as inferred from their previous messages about Let Me In. The attention weights then guide the final prediction for higher chance of re-entry to C2 rather than C1.

[Figure 6: Attention output of model BILSTM+BIA for the two sample conversations in Figure 1. Axes: context turns T1[U2], T2[U1] (Conv. 1) and T1[U3], T2[U4], T3[U5], T4[U1] (Conv. 2), against user history messages H1–H4.]
5.4 Comparing with Humans
We are also interested in how humans perform on the first re-entry prediction task, in order to find out how challenging such a task is. To achieve this, we design a human evaluation. Concretely, from each dataset, we randomly sample 50 users who have been involved in at least 4 conversations, with both re-entry and non re-entry behaviors exhibited. Then for each user u, we construct paired samples based on randomly selected conversations c1 and c2, where u re-engages in one but not the other. The rest of the conversations that u participated in are collected as their user history. Then, we invite two humans who are fluent speakers of English, to predict which conversation user u will re-engage in, after reading the context up to the user's first participation in the paired conversations c1 and c2. They are requested to make a second prediction after reading the user's chatting history. Humans' prediction performance is shown in Table 5 along with the BILSTM+BIA model's output on the same data.

Predictor       Twitter    Reddit
Human 1         26 (29)    30 (30)
Human 2         25 (28)    28 (29)
BILSTM+BIA      35         33
Table 5: Numbers of correct predictions made by humans, reading conversation context only and further seeing users' chatting history (numbers in parentheses), compared to the results of our best model in the same setting. A random guess gives 25 (out of 50 pairs).

As can be seen, humans can only give marginally better predictions than a random guess, i.e., 25 out of 50 pairs. Their performance improves after reading the user's previous posts; however, it still falls behind our model's predictions. This indicates the ability of our model to learn from large-scale data and align users' interests with conversation content. In addition, we notice that humans yield better performance on Reddit conversations than Twitter. It might be due to the fact that Reddit conversations are more focused, and it is easier for humans to identify the discussion points. For Twitter discussions, the informal language usage further hinders humans' judgment.

6 Conclusion
We study the joint effects of conversation context and user chatting history for re-entry prediction. A novel neural framework is proposed for learning the interactions between the two sources of information. Experimental results on two large-scale datasets from Twitter and Reddit show that our model with bi-attention yields better performance than the previous state of the art. Further discussions show that the model learns meaningful representations from conversation context and user history and hence exhibits consistently better performance given varying lengths of context or history. We also conduct a human study on the first re-entry prediction task. Our proposed model is observed to outperform humans, benefiting from its effective learning from large-scale data.

Acknowledgements
This work is partly supported by HK RGC GRF (14232816, 14209416, 14204118), NSFC (61877020). Lu Wang is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341. We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work.

References
Noor Aldeen Alawad, Aris Anagnostopoulos, Stefano Leonardi, Ida Mele, and Fabrizio Silvestri. 2016. Network-aware recommendations of novel tweets. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 913–916. ACM.
Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012. Predicting responses to microblog posts.
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 602–606. Association for Computational Linguistics.
Lars Backstrom, Jon M. Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. 2013. Characterizing and curating conversation threads: expansion, focus, volume, re-entry. In Sixth ACM International Conference on Web Search and Data Mining, WSDM 2013, Rome, Italy, February 4-8, 2013, pages 13–22.
Ceren Budak and Rakesh Agrawal. 2013. On participation in group chats on Twitter. In Proceedings of the 22nd International Conference on World Wide Web, pages 165–176. ACM.
Kailong Chen, Tianqi Chen, Guoqing Zheng, Ou Jin, Enpeng Yao, and Yong Yu. 2012. Collaborative personalized tweet recommendation. In Proceedings of the 35th international ACM SIGIR Conference on Research and development in information retrieval, pages 661–670. ACM.
Hao Cheng, Hao Fang, and Mari Ostendorf. 2017. A factored neural network model for characterizing online discussions in vector space. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2296–2306.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315–323.
Liangjie Hong, Aziz S Doumith, and Brian D Davison. 2013. Co-factorization machines: modeling user interests and predicting individual decisions in Twitter. In Proceedings of the sixth ACM International Conference on Web Search and Data Mining, pages 557–566. ACM.
Yunhao Jiao, Cheng Li, Fei Wu, and Qiaozhu Mei. 2018. Find the conversation killers: A predictive study of thread-ending posts. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1145–1154. International World Wide Web Conferences Steering Committee.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics-Volume 1, pages 63–70. Association for Computational Linguistics.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, pages 172–180.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448.
Rui Yan, Mirella Lapata, and Xiaoming Li. 2012. Tweet recommendation with graph co-ranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 516–525. Association for Computational Linguistics.
Victoria Zayats and Mari Ostendorf. 2018. Conversation modeling on reddit using a graph-structured LSTM. TACL, 6:121–132. Xingshan Zeng, Jing Li, Lu Wang, Nicholas Beauchamp, Sarah Shugars, and Kam-Fai Wong. 2018. Microblog conversation recommendation via joint modeling of topics and discourse. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, Volume 1 (Long Papers), pages 375–385. Justine Zhang, Jonathan P Chang, Cristian DanescuNiculescu-Mizil, Lucas Dixon, Yiqing Hua, Nithum Thain, and Dario Taraborelli. 2018. Conversations gone awry: Detecting early signs of conversational failure. arXiv preprint arXiv:1805.05345. Qi Zhang, Yeyun Gong, Ya Guo, and Xuanjing Huang. 2015. Retweet behavior prediction using hierarchical Dirichlet process. In AAAI, pages 403–409.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2819–2829 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2819 CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech Yi-Ling Chung1,2, Elizaveta Kuzmenko2, Serra Sinem Tekiro˘glu1, and Marco Guerini1 1Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, Italy [email protected], [email protected], [email protected] 2University of Trento, Italy [email protected] Abstract Although there is an unprecedented effort to provide adequate responses in terms of laws and policies to hate content on social media platforms, dealing with hatred online is still a tough problem. Tackling hate speech in the standard way of content deletion or user suspension may be charged with censorship and overblocking. One alternate strategy, that has received little attention so far by the research community, is to actually oppose hate content with counter-narratives (i.e. informed textual responses). In this paper, we describe the creation of the first large-scale, multilingual, expert-based dataset of hate speech/counternarrative pairs. This dataset has been built with the effort of more than 100 operators from three different NGOs that applied their training and expertise to the task. Together with the collected data we also provide additional annotations about expert demographics, hate and response type, and data augmentation through translation and paraphrasing. Finally, we provide initial experiments to assess the quality of our data. 1 Introduction Together with the rapid growth of social media platforms, the amount of user-generated content is steadily increasing. At the same time, abusive and offensive language can spread quickly and is difficult to monitor. Defining hate speech is challenging for the broadness and the nuances in cultures and languages. For instance, according to UNESCO hate speech refers to “expressions that advocate incitement to harm based upon the targets being identified with a certain social or demographic group” (Gagliardone et al., 2015). Victims of hate speech are usually targeted because of various aspects such as gender, race, religion, sexual orientation, physical appearance. For example, Sentence 1 shows explicit hostility towards a specific group with no reasons explained1. (1) I hate Muslims. They should not exist. Online hate speech can deepen prejudice and stereotypes (Citron and Norton, 2011) and bystanders may receive false messages and consider them correct. Although Social Media Platforms (SMP) and governmental organizations have elicited unprecedented attention to take adequate actions against hate speech by implementing laws and policies (Gagliardone et al., 2015), they do not seem to achieve the desired effect, since hate content is continuously evolving and adapting, making its identification a tough problem (Davidson et al., 2017). The standard approach used on SMPs to prevent hate spreading is the suspension of user accounts or deletion of hate comments, while trying to weigh the right to freedom of speech. Another strategy, which has received little attention so far, is to use counter-narratives. A counternarrative (sometimes called counter-comment or counter-speech) is a response that provides nonnegative feedback through fact-bound arguments and is considered as the most effective approach to withstand hate speech (Benesch, 2014; Schieb and Preuss, 2016). 
In fact, it preserves the right to freedom of speech, counters stereotypes and misleading information with credible evidence. It can also alter the viewpoints of haters and bystanders, by encouraging the exchange of opinions and mutual understanding, and can help de-escalating the conversation. A counter-narrative such as the one in Sentence 2 is a non-negative, appropriate response to Sentence 1, while the one in 3 is not, since it escalates the conversation.

(2) Muslims are human too. People can choose their own religion.
(3) You are truly one stupid backwards thinking idiot to believe negativity about Islam.

1 It is crucial to note that this paper contains examples of language which may be offensive to some readers. They do not represent the views of the authors.

In this respect, some NGOs are tackling hatred online by training operators to monitor SMPs and to produce appropriate counter-narratives when necessary. Still, manual intervention against hate speech is a toil of Sisyphus, and automatizing the countering procedure would increase the efficacy and effectiveness of hate countering (Munger, 2017). As a first step in the above direction, we have nichesourced the collection of a dataset of counter-narratives to 3 different NGOs. Nichesourcing is a specific form of outsourcing that harnesses the computational efforts from niche groups of experts rather than the 'faceless crowd' (De Boer et al., 2012). Nichesourcing combines the strengths of the crowd with those of professionals (De Boer et al., 2012; Oosterman et al., 2014). In our case we organized several data collection sessions with NGO operators, who are trained experts, specialized in writing counter-narratives that are meant to fight hatred and de-escalate the conversation. In this way we build the first large-scale, multilingual, publicly available, expert-based dataset of hate speech/counter-narrative pairs for English, French and Italian, focusing on the hate phenomenon of Islamophobia. The construction of this dataset involved more than 100 operators and more than 500 person-hours of data collection. After the data collection phase, we hired three non-expert annotators, that performed additional tasks that did not require specific domain expertise (200 person-hours of work): paraphrase original hate content to augment the number of pairs per language, annotate hate content subtopic and counter-narrative type, translate content from Italian and French to English to have parallel data across languages. This additional annotation grants that the dataset can be used for several NLP tasks related to hate speech. The remainder of the paper is structured as follows. First, we briefly discuss related work on hate speech in Section 2. Then, in Section 3, we introduce our CONAN dataset and some descriptive statistics, followed by a quantitative and qualitative analysis on our dataset in Section 4. We conclude with our future works in Section 5.

2 Related Work
With regard to hatred online, we will focus on three research aspects about the phenomenon, i.e. (i) publicly available datasets, (ii) methodologies for detecting hate speech, (iii) seminal works that focus on countering hate speech.

Hate datasets. Several hate speech datasets are publicly available, usually including a binary annotation, i.e. whether the content is hateful or not (Reynolds et al., 2011; Rafiq et al., 2015; Hosseinmardi et al., 2015; de Gibert et al., 2018; ElSherief et al., 2018).
Also, several shared tasks have released their datasets for hate speech detection in different languages. For instance, there is the German abusive language identification on SMPs at Germeval (Bai et al., 2018), or the hate speech and misogyny identification for Italian at EVALITA (Del Vigna et al., 2017; Fersini et al., 2018) and for Spanish at IberEval (Ahluwalia et al., 2018; Shushkevich and Cardiff, 2018). Bilingual hate speech datasets are also available for Spanish and English (Pamungkas et al., 2018). Waseem and Hovy (2016) released 16k annotated tweets containing 3 offense types: sexist, racist and neither. Ross et al. (2017) first released a German hate speech dataset of 541 tweets targeting refugee crisis and then offered insights for the improvement on hate speech detection by providing multiple labels for each hate speech. It should be noted that, due to the copyright limitations, usually hate speech datasets are distributed as a list of tweet IDs making them ephemeral and prone to data loss (Klubiˇcka and Fern´andez, 2018). For this reason, Sprugnoli et al. (2018) created a multi-turn annotated WhatsApp dataset for Italian on Cyberbullying, using simulation session with teenagers to overcome the data collection/loss problem. Hate detection. Several works have investigated online English hate speech detection and the types of hate speech. Owing to the availability of current datasets, researchers often use supervisedapproaches to tackle hate speech detection on SMPs including blogs (Warner and Hirschberg, 2012; Djuric et al., 2015; Gitari et al., 2015), Twitter (Xiang et al., 2012; Silva et al., 2016; Mathew et al., 2018a), Facebook (Del Vigna et al., 2017), and Instagram (Zhong et al., 2016). The predominant approaches are to build a classifier trained on various features derived from lexical resources 2821 (Gitari et al., 2015; Burnap and Williams, 2015, 2016), n-grams (Sood et al., 2012; Nobata et al., 2016) and knowledge base (Dinakar et al., 2012), or to utilize deep neural networks (Mehdad and Tetreault, 2016; Badjatiya et al., 2017). In addition, other approaches have been proposed to detect subcategories of hate speech such as antiblack (Kwok and Wang, 2013) and racist (Badjatiya et al., 2017). Silva et al. (2016) studied the prevalent hate categories and targets on Twitter and Whisper, but limited hate speech only to the form of I <intensity> <user intent> <any word>. A comprehensive overview of recent approaches on hate speech detection using NLP can be found in (Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018). Hate countering. Lastly, we should mention that a very limited number of studies have been conducted on counter-narratives (Benesch, 2014; Schieb and Preuss, 2016; Ernst et al., 2017; Mathew et al., 2018b). Mathew et al. (2018b) collected Youtube comments that contain counternarratives to YouTube videos of hatred. Schieb and Preuss (2016) studied the effectiveness of counter-narrative on Facebook via a simulation model. The study of Wright et al. (2017) shows that some arguments among strangers induce favorable changes in discourse and attitudes. To our knowledge, there exists only one very recent seminal work (Mathew et al., 2018a), focusing on the idea of collecting hate message/counternarrative pairs from Twitter. They used a simple pattern in the form (I <hate> <category>) to first extract hate tweets and then manually annotate counter-narratives found in the responses. 
Still, there are several shortcomings of their approach: (i) this dataset already lost more that 60% of the pairs in a small time interval (content deletion) since only tweet IDs are distributed, (ii) it is only in English language, (iii) the dataset was collected from a specific template which limits the coverage of hate speech, and (iv) many of these answers come from ordinary web users and contain -for example- offensive text, that do not meet the de-escalation intent of NGOs and the standards/quality of their operators’ responses. Considering the aforementioned works, we can reasonably state that no suitable corpora of counter-narratives is available for our purposes, especially because the natural ‘countering’ data that can be found on SMP – such as example 3 – often does not meet the required standards. For this reason we decided to build CONAN, a dataset of COunter NArratives through Nichesourcing. 3 CONAN Dataset In this section, we describe the characteristics that we intend our dataset to posses, the nichesourcing methodology we employed to collect the data and the further expansion of the dataset together with the annotation procedures. Moreover, we give some descriptive statistics and analysis for the collected data. CONAN can be downloaded at the following link https://github.com/ marcoguerini/CONAN. 3.1 Fundamentals of the Dataset Considering the shortcomings of the existing datasets and our aim to provide a reliable resource to the research community, we want CONAN to comply with the following characteristics: Copy-free data. We want to provide a dataset that is not ephemeral, by releasing only copy-free textual data that can be directly exploited by researches without data loss across time, as originally pointed out in (Klubiˇcka and Fern´andez, 2018). Multilingual data. Our dataset is produced as a multilingual resource to allow for cross lingual studies and approaches. In particular, it contains hate speech/counter-narrative pairs for English, French, and Italian. Expert-based data. The hate speech/counternarrative pairs have been collected through nichesourcing to three different NGOs from United Kingdom, France and Italy. Therefore, both the responses and the hate speech itself are expert-based and composed by operators, specifically trained to oppose online hate speech. Protecting operator’s identity. We aim to create a secure dataset that will not disclose the identity of operators in order to protect them against being tracked and attacked online by hate spreaders. This might be the case if we were to collect their real SMP activities, following a procedure similar to the one in Mathew et al. (2018a). Therefore our data collection was based on simulated SMP activity. 2822 Demographic-based metadata. Demographicbased NLP can be used for several tasks, such as characterizing personal linguistic styles (Johannsen et al., 2015; Hovy and Spruit, 2016; van der Goot et al., 2018; DellOrletta and Nissim, 2018), improving text classification (Mandel et al., 2012; Volkova et al., 2013; Hovy, 2015), or personalizing conversational agents (Qiu and Benbasat, 2010; Mazar´e et al., 2018a). In this work, we collect demographic information of participants; i.e. gender, age, and education level, to provide data for counter-narrative personalization. 3.2 Dataset Collection We have followed the same data collection procedure for each language to grant the same conditions and comparability of the results. The data collection has been conducted along the following steps: 1. 
Hate speech collection. For each language we asked two native speaker experts (NGO trainers) to write around 50 prototypical islamophobic short hate texts. This step was used to ensure that: (i) the sample uniformly covers the typical 'arguments' against Islam as much as possible, (ii) we can distribute to the NLP community the original hate speech as well as its counter-narrative.

2. Preparation of data collection forms. We prepared three online forms (one per language) with the same instructions for the operators translated in the corresponding language. For each language, we prepared 2 types of forms: in the first users can respond to hate text prepared by NGO trainers, in the second users can write their own hate text and counter-narratives at the same time. In each form operators were first asked to anonymously provide their demographic profile including age, gender, and education level; secondly to compose up to 5 counter-narratives for each hate text.

3. Counter-narrative instructions. The operators were already trained to follow the guidelines of the NGOs for creating proper counter-narratives. Such guidelines are highly consistent across languages and across NGOs, and are similar to those in the 'Get the Trolls Out' project2. These guidelines emphasize using fact-bounded information and non-offensive language in order to avoid escalating the discussion, as outlined in Table 1. Furthermore, for our specific data collection task, operators were asked to follow their intuitions without over-thinking and to compose reasonable responses. The motivation for this instruction was to collect as much and as diverse data as possible, since for current AI technologies (such as deep learning approaches) quantity and quality are of paramount importance and few perfect examples do not provide enough generalization evidence. Other than this instruction and the fact of using a form – instead of responding on a SMP – operators carried out their normal counter messaging activities.

2 http://stoppinghate.getthetrollsout.org/

4. Data collection sessions. For each language, we performed three data collection sessions on different days. Each session lasted roughly three hours3 and had a variable number of operators – usually around 20 (depending on their availability). Operators are different from NGO trainers and might change across sessions. Operators were gathered in the same room (NGO premises) with a computer, and received a brief introduction from the NGO trainer. This introduction was about our specific counter-narrative collection task, as described above. A sample of the collected data for the three languages is given in Table 2.

3 Duration was decided during a pilot session: after three hours operators were no longer effective in producing counter-narratives, since it is a cognitively fatiguing task.

Don't be abusive: Before submitting a response, make sure the response does not spread any hate, bigotry, prejudice or illegal content. We want to maintain the conversations peaceful and not to degenerate into a conflict. We are talking about people not categories.
Think about the objectives: Before writing a response, think about the effect it may create and the one you want to obtain. Paying attention to the objectives will help use proper words.
Call for influential users: Enlisting influential supporters (civic leaders, politicians, subject experts) will help bring attention and deepen the effect to counter-narrative.
Use credible evidence: The information in hate speech may be confusing and misleading. Ask for clarification when necessary. Counter it with credible evidence and use reliable sources.
Think about the tone: We can demonstrate understanding and support to those who might be attacked. Be careful of using sarcasm, humour, parody and satire. We can use them, if we are able to master it as they run the danger of being antagonistic.
Table 1: Counter-narrative guidelines adopted by the three NGOs.

Hate Speech (EN): Every Muslim is a potential terrorist.
Counter-Narrative (EN): Every Muslim is also a potential peacemaker, doctor, philanthropist... What's your point?
Hate Speech (IT): I musulmani violentano le nostre donne e vanno castrati. [Muslims rape our women and must be neutered.]
Counter-Narrative (IT): Buonasera, se ho capito bene, lei mi sta dicendo che tutti gli uomini adulti di fede islamica presenti in Italia, hanno violentato, violentano o violenteranno le donne italiane? Può portare dei dati a supporto della sua tesi? [Good evening, if I understood correctly, you're telling me that all adult men of Islamic faith present in Italy, raped, rape or are going to rape Italian women? Can you bring in data in support of your statement?]
Hate Speech (FR): Le voile est contraire à la laïcité. [The veil is contrary to secularism.]
Counter-Narrative (FR): Bien au contraire la laïcité permet à tout citoyen de vivre librement sa confession. [On the contrary, secularism allows every citizen to freely profess his faith.]
Table 2: Example pairs for the three languages, along with English translations.

3.3 Dataset Augmentation and Annotation
After the data collection phase, we hired three non-expert annotators, that performed additional work that did not require specific domain expertise. Their work amounted to roughly 200 hours. In particular they were asked to (i) paraphrase original hate content to augment the number of pairs per language, (ii) annotate hate speech subtopics and counter-narrative types, (iii) translate content from French and Italian to English to have parallel data across languages. To guarantee data quality, after the annotation and the augmentation phase, a validation procedure has been conducted by NGO trainers on the newly generated data for their specific language.

Paraphrasing for augmenting data pairs. Recent deep learning approaches are data hungry, and data augmentation is a way to mitigate the problem.
For instance, to improve text classification performance for sexism, new tweets are generated by replacing words in original tweets with synonyms from ConceptNet (Sharifirad et al., 2018). Other examples of data augmentation strategies are back translation (Sennrich et al., 2016) and gold standard repetition (Chatterjee et al., 2017) that have been used in sequence-to-sequence Machine Translation. In all these tasks, adding the synthetic pairs to the original data always results in significant improvements in the performance. In line with the idea of artificially augmenting pairs, and since in our dataset we have many responses for few hate speeches, we produced two manual paraphrases of each hate speech and paired them with the counter-narratives of the original one. Therefore we increased the number of our pairs by three times in each language.
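A minimal sketch of this pairing scheme is given below. The field names and the placeholder paraphrase strings are our own assumptions; the hate speech and counter-narrative strings come from the example pair shown in Table 2.

```python
# Sketch of the augmentation described above: each of the two manual paraphrases of a
# hate speech is paired with the counter-narratives written for the original text,
# tripling the number of pairs. Dictionary field names are assumptions.
def augment_pairs(hate_speeches):
    """hate_speeches: list of dicts with keys 'text' (original hate speech),
    'paraphrases' (the two manual rewrites) and 'counter_narratives' (expert responses)."""
    pairs = []
    for hs in hate_speeches:
        for variant in [hs["text"]] + hs["paraphrases"]:
            for cn in hs["counter_narratives"]:
                pairs.append((variant, cn))
    return pairs

example = [{
    "text": "Every Muslim is a potential terrorist.",
    "paraphrases": ["<manual paraphrase 1>", "<manual paraphrase 2>"],
    "counter_narratives": ["Every Muslim is also a potential peacemaker, "
                           "doctor, philanthropist... What's your point?"],
}]
# 1 original + 2 paraphrases, 1 counter-narrative -> 3 pairs
print(len(augment_pairs(example)))
```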
Counter-narrative type annotation. In this task, we asked the annotators to label each counter-narrative with types. Based on the counter-narrative classes proposed by (Benesch et al., 2016; Mathew et al., 2018b), we defined the following set of types: PRESENTATION OF FACTS, POINTING OUT HYPOCRISY OR CONTRADICTION, WARNING OF CONSEQUENCES, AFFILIATION, POSITIVE TONE, NEGATIVE TONE, HUMOR, COUNTER-QUESTIONS, OTHER. With respect to the original guidelines, we added a new type of counter-narrative called COUNTER-QUESTIONS to cover expressions/replies using a question that can be thought-provoking or asking for more evidence from the hate speaker. In fact, a preliminary analysis showed that this category is quite frequent among operator responses. Finally, each counter-narrative can be labeled with more than one type, thus making the annotation more fine-grained. Two annotators per language annotated all the counter-narratives independently. A reconciliation phase was then performed for the disagreement cases.

Hate speech sub-topic annotation. We labeled sub-topics of hate content to have an annotation that can be used both for fine grained hate speech classification, and for exploring the correlation between hate sub-topics and counter-narrative types. The following sub-topics are determined for the annotation based on the guidelines used by NGOs to identify hate messages (mostly consistent across languages): CULTURE, criticizing Islamic culture or particular aspects such as religious events or clothes; ECONOMICS, hate statements about Muslims taking European workplaces or not contributing economically to the society; CRIMES, hate statements about Muslims committing actions against the law; RAPISM, a very frequent topic in hate speech, for this reason it has been isolated from the previous category; TERRORISM, accusing Muslims of being terrorists, killers, preparing attacks; WOMEN OPPRESSION, criticizing Muslims for their behavior against women; HISTORY, stating that we should hate Muslims because of historical events; OTHER/GENERIC, everything that does not fall into the above categories. As before, two annotators per language annotated all the material. Also in this annotation task, a reconciliation phase was performed for the disagreement cases.

Parallel corpus of language pairs. To allow studying cross-language approaches to counter-narratives and more generally to increase language portability, we also translated the French and the Italian pairs (i.e. hate speech and counter-narratives) to English. Similar motivations can be found in using zero-shot learning to translate between unseen language pairs during training (Johnson et al., 2017). With parallel corpora we can exploit cross-lingual word embeddings to enable knowledge transfer between languages (Schuster et al., 2018).

3.4 Dataset Statistics
In total we had more than 500 hours of data collection with NGOs, where we collected 4078 hate speech/counter-narrative pairs; specifically, 1288 pairs for English, 1719 pairs for French, and 1071 pairs for Italian.
At least 111 operators participated in the 9 data collection sessions, and each counter-narrative needed about 8 minutes on average to be composed. The paraphrasing of hate messages and the translation of French and Italian pairs to English brought the total number of pairs to more than 15 thousand. Regarding the token length of counter-narratives, we observe that there is a consistency across the three languages with 14 tokens on average for French, and 21 for Italian and English. Considering counter-narrative length in terms of characters, only a small portion (2% for English, 1% for French, and 5% for Italian) contains more than 280 characters, which is the character limit per message in Twitter, one of the key SMPs for hate speech research. Further details on the dataset can be found in Table 3.

                        English   French   Italian
original pairs            1288      1719      1071
augmen. pairs             2576      3438      2142
transl. pairs             2790         -         -
total pairs               6654      5157      3213
HS                         136        50        62
CN per HS (µ)             9.47     34.38     17.27
CN per HS (sd)            7.56     53.86     26.48
HS vocabulary              947       193       343
HS+aug. vocab.            1631       333       790
CN vocabulary             3556      4018      3728
HS words                  2950       434       751
HS+aug. words             9770      1172      2633
CN words                 27677     23730     23129
HS words (µ)             21.69      8.68     12.11
HS words (sd)            10.29      4.02      6.69
HS+aug. words (µ)        18.72      5.31     14.16
HS+aug. words (sd)       10.05      4.73      7.65
CN words (µ)             21.49     13.80     21.60
CN words (sd)            11.06     11.44     12.42
Table 3: Main statistics of the dataset. HS stands for Hate Speech, CN stands for Counter-Narrative.

Regarding demographics, the majority of responses were written by operators that held a bachelor's or a higher degree (95% for English, 65% for French, and 69% for Italian). As it is shown in Table 4, there is a good balance in responses with regard to declared gender, with a slight predominance of counter-narratives written by female operators in English and Italian (53 and 55 per cent respectively) while a slight predominance of counter-narratives written by male operators is present in French (61%). Finally, the predominant age bin is 21-30 for English and Italian, while for French it is in the range 31-40.

                  EN     FR     IT
< high school      -     5%    14%
high school        -    14%    10%
< university      5%    16%     6%
bachelor         51%    17%    34%
master           44%    35%    30%
PhD                -    13%     5%
female           53%    39%    55%
male             47%    61%    45%
<= 20              -      -    15%
21 - 30          74%    15%    42%
31 - 40            -    51%     7%
41 - 50          18%    20%    15%
51 - 60            -    11%    16%
> 60              8%     3%     5%
Table 4: Demographic profile of the operators.

Type            EN    FR    IT
affiliation      1     4     1
consequences     0     1     0
denouncing      19    18    13
facts           38    37    47
humor            8     6     5
hypocrisy       16    14    10
negative         0     0     0
other            0     4     1
positive         6     5     7
question        12    11    16
Table 5: Counter-narrative type distribution over the three languages (% over the total number of labels).

Considering the annotation tasks, we give the distribution of counter-narrative types per language in Table 5. As can be seen in the table, there is a consistency across the languages such that FACTS, QUESTION, DENOUNCING, and HYPOCRISY are the most frequent counter-narrative types. Before the reconciliation phase, the agreement between the annotators was moderate: Cohen's Kappa4 0.55 over the three languages. This can be partially explained by the complexity of the messages, that often fall under more than one category (two labels were assigned in more than 50% of the cases). On the other hand, for hate speech sub-topic annotation, the agreement between the annotators was very high even before the reconciliation phase (Cohen's Kappa 0.92 over the three languages). A possible reason is that such messages represent short and prototypical hate arguments, as explicitly requested to the NGO trainers. In fact, the vast majority has only one label.

4 Computed using Mezzich's methodology to account for possible multiple labels that can be assigned to a text by each annotator (Mezzich et al., 1981).

In Table 6, we give a distribution of hate speech sub-topics per language. As can be observed in the table, the labels are distributed quite evenly among sub-topics and across languages - in particular, CULTURE, ISLAMIZATION, GENERIC, and TERRORISM are the most frequent sub-topics.
Type            EN    FR    IT
crimes          10     0     7
culture         30    26    11
economics        4     1     8
generic         20    27     8
islamization    11     7    36
rapism          15     0     7
terrorism        6    14    19
women            4    25     4
Table 6: Hate speech sub-topic type distribution over the three languages (% over the total number of labels).

4 Evaluation
In order to assess the quality of our dataset, we ran a series of preliminary experiments that involved three annotators to judge hate speech/counter-narrative pairs along a yes/no dimension.

Augmentation reliability. The first experiment was meant to assess how natural a pair is when coupling a counter-narrative with the manual paraphrase of the original hate speech it refers to. We administered 120 pairs to the subjects to be evaluated: 20 were kept as they are so to have an upper bound representing ORIGINAL pairs. In 50 pairs we replaced the hate speech with a PARAPHRASE, and in the 50 remaining pairs, we randomly matched a hate speech with a counter-narrative from another hate speech (UNRELATED baseline). Results show that 85% of the times in the ORIGINAL condition hate speech and counter-narrative were considered as clearly tied, followed by the 74% of times by the PARAPHRASE condition, and only 4% of the UNRELATED baseline; this difference is statistically significant with p < .001 (w.r.t. χ2 test). This indicates that the quality of augmented pairs is almost as good as the one of original pairs.

Augmentation for counter-narrative selection. Once we assessed the quality of augmented pairs, we focused on the possible contribution of the paraphrases also in standard information retrieval approaches that have been used as baselines in dialogue systems (Lowe et al., 2015; Mazaré et al., 2018b). We first collected a small sample of natural/real hate speech from Twitter using relevant keywords (such as "stop Islam") and manually selected those that were effectively hate speeches. We then compared 2 tf-idf response retrieval models by calculating the tf-idf matrix using the following document variants: (i) hate speech and counter-narrative response, (ii) hate speech, its 2 paraphrases, and counter-narrative response. The final response for a given sample tweet is calculated by finding the highest score among the cosine similarities between the tf-idf vectors of the sample and all the documents in a model. For each of the 100 natural hate tweets, we then provided 2 answers (one per approach) selected from our English database. Annotators were then asked to evaluate the responses with respect to their relevancy/relatedness to the given tweet. Results show that introducing the augmented data as a part of the tf-idf model provides 9% absolute increase in the percentage of the agreed 'very relevant' responses, i.e. from 18% to 27% - this difference is statistically significant with p < .01 (w.r.t. χ2 test). This result is especially encouraging since it shows that the augmented data can be helpful in improving even a basic automatic counter-narrative selection model.
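The two retrieval variants can be sketched with a few lines of scikit-learn. The code below is our own illustration of the described procedure, not the authors' pipeline; the pair dictionary format is an assumption.

```python
# Sketch of the two tf-idf retrieval models compared above: documents are either
# hate speech + response, or hate speech + its two paraphrases + response;
# the counter-narrative of the most similar document is returned.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_retriever(pairs, use_paraphrases=False):
    # pairs: list of dicts with 'hs', 'paraphrases' (two strings) and 'cn' -- assumed format
    docs = []
    for p in pairs:
        text = p["hs"]
        if use_paraphrases:
            text += " " + " ".join(p["paraphrases"])
        docs.append(text + " " + p["cn"])
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs)
    return vectorizer, matrix

def retrieve(tweet, pairs, vectorizer, matrix):
    query = vectorizer.transform([tweet])
    sims = cosine_similarity(query, matrix)[0]
    return pairs[sims.argmax()]["cn"]
```

In this sketch, switching use_paraphrases on corresponds to variant (ii), which is the model that yielded the 9% absolute gain in agreed 'very relevant' responses.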
Impact of Demographics. The final experiment was designed to assess whether demographic information can have a beneficial effect on the task of counter-narrative selection/production. In this experiment, we selected a subsample of 230 pairs from our dataset written by 4 male and 4 female operators that were controlled for age (i.e. same age range). We then presented our subjects with each pair in isolation and asked them to state whether they would definitely use that particular counter-narrative for that hate speech or not. Note that, in this case, we did not ask whether the counter-narrative was relevant, but if they would use that given counter-narrative text to answer the paired hate speech. The results show that in the SAMEGENDER configuration (gender declared by the operator who wrote the message and gender declared by the annotator are the same), the appreciation was expressed 47% of the times, while it decreases to 32% in the DIFFERENTGENDER configuration (gender declared by the operator who wrote the message and gender declared by the annotator are different). This difference is statistically significant with p < .001 (w.r.t. χ2 test), indicating that even if operators were following the same guidelines and were instructed on the same possible arguments to build counter-narratives, there is still an effect of their gender on the produced text, and this effect contributes to the counter-narrative preference in a SAMEGENDER configuration.

5 Conclusion
As online hate content rises massively, responding to it with counter-narratives as a combating strategy draws the attention of international organizations. Although a fast and effective responding mechanism can benefit from an automatic generation system, the lack of large datasets of appropriate counter-narratives hinders tackling the problem through supervised approaches such as deep learning. In this paper, we described CONAN: the first large-scale, multilingual, and expert-based hate speech/counter-narrative dataset for English, French, and Italian. The dataset consists of 4078 pairs over the 3 languages. Together with the collected data we also provided several types of metadata: expert demographics, hate speech sub-topic and counter-narrative type. Finally, we expanded the dataset through translation and paraphrasing. As future work, we intend to continue collecting more data for Islam and to include other hate targets such as migrants or LGBT+, in order to put the dataset at the service of other organizations and further research. Moreover, as a future direction, we want to utilize the CONAN dataset to develop a counter-narrative generation tool that can support NGOs in fighting hate speech online, considering counter-narrative type as an input feature.

Acknowledgments
This work was partly supported by the HATEMETER project within the EU Rights, Equality and Citizenship Programme 2014-2020. We are grateful to the following NGOs and all annotators for their help: Stop Hate UK, Collectif Contre l'Islamophobie en France, Amnesty International (Italian Section - Task force hate speech).

References
Resham Ahluwalia, Evgeniia Shcherbinina, Edward Callow, Anderson Nascimento, and Martine De Cock. 2018. Detecting misogynous tweets. Proc. of IberEval, 2150:242–248.
Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759–760. International World Wide Web Conferences Steering Committee.
Xiaoyu Bai, Flavio Merenda, Claudia Zaghi, Tommaso Caselli, and Malvina Nissim. 2018. Rug at germeval: Detecting offensive speech in german social media. In 14th Conference on Natural Language Processing KONVENS 2018.
Susan Benesch. 2014. Countering dangerous speech: new ideas for genocide prevention. Washington, DC: US Holocaust Memorial Museum.
Susan Benesch, Derek Ruths, Kelly P Dillon, Haji Mohammad Saleem, and Lucas Wright. 2016. Counterspeech on twitter: A field study. Dangerous Speech Project.
2019
271
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2830–2840 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2830 Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts Alakananda Vempala Bloomberg LP [email protected] Daniel Preot¸iuc-Pietro Bloomberg LP [email protected] Abstract Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate. 1 Introduction Social media sites have traditionally been centered around publishing textual content. Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity. Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016). Thus, in addition to text, images have become key components of tweets. However, little is known about how textual content is related to the images with which they appear. For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or This is what happens when you lock your bike to a sign Awesome! (a) Image adds to the tweet meaning & Text is represented in image (b) Image adds to the tweet meaning & Text is not represented in image Tacos are the best Last exam turned in. No more juggling work + school + family + hobbies. Maybe now they’ll finally give me a BSc (c) Image does not add to meaning & Text is represented in image (d) Image does not add to meaning & Text is not represented in image Figure 1: Examples of the four types of text-image relationship from this study. can just provide commentary on the image content. Formalizing and understanding the relationship between the two modalities – text and images – is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op2831 timize screen space (see Figure 2). 
Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a) or by providing the context for understanding the text (Figure 1b); • In Figures 1(c,d), the image only illustrates what is expressed through text, without providing any additional information. Hence, in both of these cases, the text alone is sufficient to understanding the tweet’s key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c; • In Figures 1(b,d), the textual content is not represented in the image, with the text being either a comment on the image’s content (Figure 1b) or the image illustrating a feeling related to the text’s content. In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet. Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text - image relationship type;1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author’s demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type. 2 Related Work Task. The relationship between a text and its associated image was researched in a few prior studies. For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen1Data set is available at: https://github.com/ danielpreotiuc/text-image-relationship/ sions. Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap. Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content. Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative (Alikhani and Stone, 2019). Wang et al. (2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly. Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image. However, none of these studies propose any predictive methods for text-image relationship types. Alikhani et al. (2019) annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail. Chen et al. (2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content. 
They build models using the text and image content that predict the relationship type (Chen et al., 2015). We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet. Applications. Several applications require to be able to automatically predict the semantic textimage relationship in the data. Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets. Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018). Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017). Several resources of images paired 2832 (a) Full feed with all images displayed (b) Feed which hides images that do not add content Figure 2: Example of application using the image task classifier. Automatically collapsing images that do not add content beyond text optimizes screen real estate and allows users to view more tweets in their feed view. The end-user could open hidden images individually. with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014). However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter. Being able to classify this relationship can be useful for all above-mentioned applications. 3 Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity. The first task is centered on the role of the text to the tweet’s semantics, while the second focuses on the image’s role. The first task – referred to as the text task in the rest of the paper – focuses on identifying if there is semantic overlap between the context of the text and the image. This task is the defined using the following guidelines: 1. Some or all of the content words in the text are represented in the image (Text is represented) 2. None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a,c) (Text is represented) with Figures 1(b,d) (Text is not represented). The second task – referred to as the image task in the rest of the paper – focuses on the role of the image to the semantics of the tweet and aims to identify if the image’s content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party. This task is defined and annotated using the following guidelines: 1. 
Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text. 2. Image does not add additional content that represents the meaning of text+image (Image does not add). Examples for the image task can be seen in Figure 1 by comparing Figures 1(a,b) (Image adds) with Figures 1(c,d) (Image does not add). Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task). All of the four relationship types are exemplified in Figure 1. 2833 4 Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus. To the best of our knowledge, no such corpus exists in prior research. 4.1 Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011). It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010). The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research (Preot¸iuc-Pietro et al., 2017). This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups. We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits). We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time. We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012). In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis. Our final data set contains 4,471 tweets. 4.2 Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income. All users solicited for data collection were from the United States in order to limit cultural variation. • Gender was considered binary2 and coded with Female – 1 and Male – 0. All other variables are treated as ordinal variables. • Age is represented as a integer value in the 13– 90 year old interval. 2We asked users to report gender as either ‘Female’, ‘Male’ or an open-ended field, and removed the few users which did not select ‘Male’ or ‘Female’ • Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being ‘No high school degree’ (coded as 1) and the highest being ‘Advanced Degree (e.g., PhD)’ (coded as 6). • Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from ‘< $20,000’ to ‘> $200,000’). For a full description of the user recruitment and quality control processes, we refer the interested reader to (Preot¸iuc-Pietro et al., 2017). 4.3 Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower). 
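The two binary tasks are annotated independently (as described next) and only afterwards combined into the four relationship types of Section 3, so the mapping is a simple cross-product of the two labels. A minimal illustrative sketch in Python; the function and variable names are ours and not part of the released data set:

```python
def combine_labels(text_represented: bool, image_adds: bool) -> str:
    """Map the two binary task labels onto one of the four relationship types."""
    text_part = "Text is represented" if text_represented else "Text is not represented"
    image_part = "Image adds" if image_adds else "Image does not add"
    return f"{text_part} & {image_part}"

# Example: the tweet in Figure 1a corresponds to
# combine_labels(True, True) -> "Text is represented & Image adds"
```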
We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other. For quality control, 10% of annotations were test questions annotated by the authors. Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid. Inter-annotator agreement is measured using Krippendorf’s Alpha. The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008). We collect 3 judgments and use majority vote to obtain the final label to further remove noise. For the text task, we collected and aggregated 5 judgments as the Krippendorf’s Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008). The latter task was more difficult due to requiring specific world knowledge (e.g. a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked. The aggregated judgments for each task were combined to obtain the four class labels. The label distributions of the aggregated annotations are: a) Text is represented & Image adds: 18.5%; b) Text is represented & Image does not add: 21.9%; c) Text is not represented & Image adds: 25.6%; d) Text is not represented & Image does not add: 33.8%. 2834 5 Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets. We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them. The methods we use are described in this section. 5.1 User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets. The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; Kosinski et al., 2013). We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author’s demographic traits. We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship. 5.2 Tweet Metadata We experiment with using the tweet metadata as features. These code if a tweet is a reply, tweet, like or neither. We also add as features the tweet like count, the number of followers, friends and posts of the post’s author and include them all in a logistic regression classifier. These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches. 5.3 Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship. We expect that certain textual cues will be specific to relationships even without considering the image content. For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image. Surface Features. We first use a range of surface features which capture more of the shallow stylistic content of the tweet. 
We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier. Bag of Words. The most common approach for building a text-based model is using bag-ofwords features. Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005). LSTM. Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network. The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014). The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014). 5.4 Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship. Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content. For example, images of people may be more likely to have in the text the names of those persons. To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set. ImageNet (Deng et al., 2009) is a visual database developed for object recognition research and consists of 1000 object types. In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015), which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models. ImageNet Classes. First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet. Then, we pass those features to a logistic regression classifier which is trained on our task. In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier. Tuned InceptionNet. Additionally, we tailored 2835 the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes. Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers. We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images. The two approaches to classification using image content based on pre-trained model on ImageNet have been used successfully in past research (Cinar et al., 2015). 5.5 Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task. Ensemble. A simple method for combining the information from both modalities is to build an ensemble classifier. This is done with a logistic regression model with two features: the Bag of Words text model’s predicted class probability and the Tuned InceptionNet model’s predicted class probability. 
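A minimal sketch of this two-feature stacking setup, assuming scikit-learn and a binary task (e.g. the image task); the paper does not specify its implementation, so all names here are illustrative:

```python
# Two-feature stacking ensemble: a logistic regression meta-classifier over the
# positive-class probabilities of the Bag of Words and Tuned InceptionNet models.
# p_text and p_image hold, for each tweet, the base models' held-out probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ensemble(p_text, p_image, y):
    """Fit the meta-classifier on the two base-model probabilities."""
    meta_features = np.column_stack([p_text, p_image])   # shape: (n_tweets, 2)
    return LogisticRegression().fit(meta_features, y)

def predict_ensemble(meta_clf, p_text, p_image):
    """Predict the relationship label from new base-model probabilities."""
    return meta_clf.predict(np.column_stack([p_text, p_image]))
```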
The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models. LSTM + InceptionNet. We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes). The final output is our text-image relationship type. We use the Adam optimizer to fine tune this network. The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set. 6 Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments. Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set. Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5. The weighted F1 score is the weighted average of the class-level F1 scores, Method Image Task Text Task Image+Text Task Majority Baseline 0.37 0.44 0.16 User Demographics 0.39 0.45 0.17 Tweet Metadata 0.38 0.48 0.21 Text-based Methods Surface Features 0.39 0.53 0.21 Bag of Words 0.56 0.56 0.33 LSTM 0.60 0.57 0.33 Image-based Methods ImageNet Classes 0.67 0.52 0.33 Tuned InceptionNet 0.76 0.53 0.39 Joint Text-Image Methods Ensemble 0.76 0.53 0.39 LSTM + InceptionNet 0.81 0.58 0.44 Table 1: Experimental results in predicting text-image relationship with different methods and grouped by modalities used in prediction. Results are presented in weighted F1 score. where the weight is the number of items in each class. The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task. The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks. When the two tasks are combined, both feature types offer only a slight increase over the baseline. This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section. The models that use tweet text as features show consistent improvements over the baseline for all three tasks. The two models that use the tweet’s topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features. Both content based models obtain relatively similar performance, with the LSTM performing better on the image task. The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks. Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task. This is somewhat expected, as the retuning is performed on this domain specific task. When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet2836 ter performance on the text task. 
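A minimal sketch of the LSTM + InceptionNet late-fusion architecture described above, assuming PyTorch. The 200-dimensional embeddings and the 64-node hidden layer follow the paper; the LSTM state size, the 2048-dimensional InceptionNet feature vector, and all names are our assumptions rather than the authors' code:

```python
# Late fusion: concatenate the final LSTM state (text) with precomputed
# InceptionNet features (image) and classify via one 64-node hidden layer.
import torch
import torch.nn as nn

class JointTextImageClassifier(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=200, img_dim=2048, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # GloVe-initialized in the paper
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)
        self.fc = nn.Sequential(
            nn.Linear(emb_dim + img_dim, hidden),        # concatenated text + image features
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, token_ids, image_features):
        _, (h_n, _) = self.lstm(self.embed(token_ids))   # final LSTM state as text feature
        return self.fc(torch.cat([h_n[-1], image_features], dim=1))
```

With that sketch in place, note again the pattern in Table 1: image-based features are strongest on the image task and text-based features on the text task.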
This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type. Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality. However, by jointly modelling both modalities, we are able to obtain improvements – especially on the image task. This shows that both types of information and their interaction are important to this task. Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work. We also observe that the predictive methods we described are better at classifying the image task. The analysis section below will allow us to uncover more about what type of content characterizes each relationship type. 7 Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage. 7.1 User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preot¸iuc-Pietro et al., 2015a,b, 2016; Preot¸iucPietro et al., 2017; Preot¸iuc-Pietro and Ungar, 2018) and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019). We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter. To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2. The independent variables indicate the average times with which the user employed a certain relationship type. We code this using six different variables: two representing the two broader tasks – the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image – and four encoding each combination between the two tasks. In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Preot¸iuc-Pietro et al., 2017; Holgate et al., 2018). When studying age and gender, we only use the other trait as the control. Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons. The results are presented in Table 2. We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits. The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117). Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet. 
This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could. Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text. These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context. This represents a more image-centric approach to the meaning of the tweet that is specific to younger users. These correlations are controlled for gender. Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302). This demonstrates the importance of controlling for such factors in this type of analysis. No effects were found with respect to gender or income. 2837 Trait Gender Age Education Income Image adds -0.002 0.019 0.014 -0.020 Text represented 0.034 0.117 0.046 -0.016 Image does not add & -0.031 -0.061 -0.049 0.025 Text not represented Image does not adds & 0.038 0.045 0.038 -0.004 Text represented Image adds & -0.004 -0.070 0.000 -0.009 Text not represented Image adds 0.001 0.095 0.016 -0.015 Text represented Table 2: Pearson correlation between user demographic traits and usage of the different text-image relationship types. All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons. Results for gender are controlled for age and vice versa. Results for education and income are controlled for age and gender. 7.2 Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2. However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level. Hence, we refrain from presenting and discussing any results using this feature group as significant. 7.3 Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship. We use univariate Pearson correlation where the independent variable is each feature’s normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively. When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017). The results when using unigrams as features are presented in Figure 3, 4 and 5. Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used. These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image. Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet. A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in relative frequency a a a correlation strength (a) Image adds (b) Image does not add Figure 3: Words specific of each of the two classes from the image task when compared to the other. 
(a) Text is represented (b) Text is not represented Figure 4: Words specific of each of the two classes from the text task when compared to the other. (a) Image does not add & Text not represented (b) Image does not add & Text represented (c) Image adds & Text not represented (d) Image adds & Text represented Figure 5: Words that are specific of each of the four classes compared to all other three classes. Font size is proportional to the Pearson correlation between each relationship type and word frequency. Color is proportional to the word frequency (see legend above the figures for reference). an image. Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description. The comparison between the two outcomes of the text task is presented in Figure 4. When the text and image semantically overlap, we observe words indicative of actions (i’ve), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image. We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden2838 tial elections took place). Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i’ve, saw, tell) present when the image provides facets not covered by text (Figure 5d). Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a). In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis (’...’), all often referencing the content of the image as identified through data inspection. References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet. Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects (Figure 5a) – miss, happy, lit, like) are specific of when the image does not add additional information. Through manual inspection of these images, they often display a meme (as in Figure 1d) or unrelated expressions to the text’s content. The image adds information when the text is not represented (Figure 5c) if the latter includes personal feelings, (me, i, i’m, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data. 8 Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set. The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it. We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone. 
We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users. Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements. We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images. Acknowledgements We like to thank our colleague Austin Ray for discussing the idea that originated this paper. We thank Ravneet Arora, Luka Bradesko, Prabhanjan Kambadur, Amanda Stent, Umut Topkara and the other members of the Bloomberg AI group who provided invaluable feedback on the experiments and paper. We also thank Eduardo Blanco for supporting the collaboration and feedback. References Malihe Alikhani, Sreyasi Nag Chowdhury, Gerard de Melo, and Matthew Stone. 2019. CITE: A Corpus of Image-Text Discourse Relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 570–575. Malihe Alikhani and Matthew Stone. 2018. Exploring Coherence in Visual Explanations. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval, the First International Workshop on Multimedia Pragmatics, MIPR, pages 272–277. Malihe Alikhani and Matthew Stone. 2019. ‘Caption’ as a Coherence Relation: Evidence and Implications. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, NAACL, pages 58– 67. Jalal S. Alowibdi, Ugo A. Buy, and Philip Yu. 2013. Language Independent Gender Classification on Twitter. ASONAM. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Buffer. 2016. What 1 Million Tweets Taught Us About How People Tweet Successfully. 2839 Tao Chen, Xiangnan He, and Min-Yen Kan. 2016. Context-aware Image Tweet Modelling and Recommendation. MM, pages 1018–1027. Tao Chen, Dongyuan Lu, Min-Yen Kan, and Peng Cui. 2013. Understanding and classifying image tweets. MM, pages 781–784. Tao Chen, Hany M SalahEldeen, Xiangnan He, MinYen Kan, and Dongyuan Lu. 2015. Velda: Relating an image tweet’s text and images. AAAI, pages 30– 36. Yagmur Gizem Cinar, Susana Zoghbi, and MarieFrancine Moens. 2015. Inferring User Interests on Social Media From Text and Images. In SoMeRa Workshop, ICDM. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A Large-Scale Hierarchical Image Database. CVPR, pages 248–255. Aleksandr Farseev, Liqiang Nie, Mohammad Akbari, and Tat-Seng Chua. 2015. Harvesting Multiple Sources for User Profile Learning: A Big Data Study. ICMR, pages 235–242. Yansong Feng and Mirella Lapata. 2010. How many Words is a Picture Worth? Automatic Caption Generation for News Images. ACL, pages 1239–1249. Sharath Chandra Guntuku, Weisi Lin, Jordan Carpenter, Wee Keong Ng, Lyle H Ungar, and Daniel Preot¸iuc-Pietro. 2017. Studying Personality through the Content of Posted and Liked Images on Twitter. 
Web Science, pages 223–227. Sharath Chandra Guntuku, Daniel Preotiuc-Pietro, Johannes C Eichstaedt, and Lyle H Ungar. 2019. What Twitter Profile and Posted Images Reveal About Depression and Anxiety. ICWSM. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural computation, 9(8):1735–1780. Eric Holgate, Isabel Cachola, Daniel Preot¸iuc-Pietro, and Junyi Jessy Li. 2018. Why Swear? Analyzing and Inferring the Intentions of Vulgar Expressions. EMNLP, pages 4405–4414. Mainak Jas and Devi Parikh. 2015. Image specificity. CVPR, pages 2727–2736. Akshay Java, Xiaodan Song, Tim Finin, and Belle Tseng. 2007. Why we twitter: understanding microblogging usage and communities. In Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis, pages 56–65. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private Traits and Attributes are Predictable from Digital Records of Human Behavior. PNAS, 110(15). Efthymios Kouloumpis, Theresa Wilson, and Johanna D Moore. 2011. Twitter Sentiment Analysis: The Good the Bad and the OMG! ICWSM, pages 538–541. Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and Characterising User Impact on Twitter. EACL, pages 405–413. Chee Wee Leong and Rada Mihalcea. 2011. Measuring the semantic relatedness between words and images. In Proceedings of the Ninth International Conference on Computational Semantics, ACL, pages 185–194. Leqi Liu, Daniel Preot¸iuc-Pietro, Zahra Riahi, Mohsen E. Moghaddam, and Lyle Ungar. 2016. Analyzing Personality through Social Media Profile Picture Choice. ICWSM, pages 211–220. Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing when to Look: Adaptive Attention via a Visual Sentinel for Image Captioning. CVPR, pages 375–383. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf Language Identification Tool. ACL, pages 25–30. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the Limits of Weakly Supervised Pretraining. ECCV, pages 185–201. Emily E Marsh and Marilyn Domas White. 2003. A Taxonomy of Relationships between Images and Text. Journal of Documentation, 59(6):647–672. Radan Martinec and Andrew Salway. 2005. A System for Image–Text Relations in New (and Old) Media. Visual Communication, 4(3):337–371. Margaret Mitchell, Xufeng Han, Jesse Dodge, Alyssa Mensch, Amit Goyal, Alex Berg, Kota Yamaguchi, Tamara Berg, Karl Stratos, and Hal Daum´e III. 2012. Midge: Generating Image Descriptions from Computer Vision Detections. EACL, pages 747–756. Seungwhan Moon, Leonardo Neves, and Vitor Carvalho. 2018. Multimodal Named Entity Disambiguation for Noisy Social Media Posts. ACL, pages 2000–2008. Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. 2011. Im2text: Describing images using 1 million captioned photographs. NIPS, pages 1143–1151. Marco Pennacchiotti and Ana-Maria Popescu. 2011. A Machine Learning Approach to Twitter User Classification. ICWSM, pages 281–288. 2840 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. EMNLP, pages 1532–1543. Daniel Preot¸iuc-Pietro, Jordan Carpenter, Salvatore Giorgi, and Lyle Ungar. 2016. Studying the Dark Triad of Personality using Twitter Behavior. CIKM, pages 761–770. 
Daniel Preot¸iuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015a. An Analysis of the User Occupational Class through Twitter Content. ACL, pages 1754–1764. Daniel Preot¸iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015b. Studying User Income through Language, Behaviour and Affect in Social Media. PLoS ONE. Daniel Preot¸iuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. ACL, pages 729–740. Daniel Preot¸iuc-Pietro and Lyle Ungar. 2018. Developing User-Level Race and Ethnicity Predictors from Twitter Text. COLING, pages 1534–1545. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying Latent User Attributes in Twitter. SMUC. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised Modeling of Twitter Conversations. NAACL, pages 172–180. Zahra Riahi Samani, Sharath Chandra Guntuku, Mohsen Ebrahimi Moghaddam, Daniel Preot¸iucPietro, and Lyle H Ungar. 2018. Cross-platform and Cross-interaction Study of User Personality based on Images on Twitter and Flickr. PloS ONE, 13(7). H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, and Martin EP Seligman. 2013. Personality, Gender, and Age in the Language of Social Media: The Open-vocabulary Approach. PloS ONE, 8(9). H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Johannes Eichstaedt, and Lyle Ungar. 2017. Dlatk: Differential language analysis toolkit. EMNLP, pages 55–60. Marcin Skowron, Marko Tkalˇciˇc, Bruce Ferwerda, and Markus Schedl. 2016. Fusing Social Media Cues: Personality Prediction from Twitter and Instagram. WWW Companion. Ionut Sorodoc, Jey Han Lau, Nikolaos Aletras, and Timothy Baldwin. 2017. Multimodal topic labelling. volume 2 of EACL, pages 701–706. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going Deeper with Convolutions. CVPR, pages 1–9. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. CVPR, pages 3156–3164. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring User Political Preferences from Streaming Communications. ACL, pages 186–196. Zhiyu Wang, Peng Cui, Lexing Xie, Wenwu Zhu, Yong Rui, and Shiqiang Yang. 2014. Bilateral Correspondence Model for Words-and-Pictures Association in Multimedia-Rich Microblogs. ACM Transactions on Multimedia Computing, Communications, and Applications, 10(4):34:1–34:21. Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. 2018. RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1358–1368. Quanzeng You, Sumit Bhatia, Tong Sun, and Jiebo Luo. 2014. The Eyes of the Beholder: Gender Prediction using Images Posted in Online Social Networks. ICDM. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67– 78. Hui Zou and Trevor Hastie. 2005. Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society, Series B.
2019
272
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2841–2847 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2841 Who Sides With Whom? Towards Computational Construction of Discourse Networks for Political Debates Sebastian Pad´o1, Andr´e Blessing1, Nico Blokker2, Erenay Dayanik1, Sebastian Haunss2, and Jonas Kuhn1 1IMS, University of Stuttgart, Germany 2SOCIUM, University of Bremen, Germany Abstract Understanding the structures of political debates (which actors make what claims) is essential for understanding democratic political decision-making. The vision of computational construction of such discourse networks from newspaper reports brings together political science and natural language processing. This paper presents three contributions towards this goal: (a) a requirements analysis, linking the task to knowledge base population; (b) a first release of an annotated corpus of claims on the topic of migration, based on German newspaper reports; (c) initial modeling results. 1 Introduction Democratic decision making can follow broadly two logics: In a technocratic, depoliticized mode, decision-making is carried out by administrative staff and experts. However, arguably most political decisions affecting large populations attract public attention and thus happen in a politicized mode, in which public debates accompany decision making (de Wilde, 2011; Z¨urn, 2014; Haunss and Hofmann, 2015). Understanding the structure and evolution of political debates is therefore essential for understanding democratic decision making. Recent innovations that combine political claims analysis (Koopmans and Statham, 1999) with network science under the name of discourse network analysis (Leifeld, 2016a) allow us to systematically analyze the dynamics of political debates based on the annotation of large newspaper corpora. So far, such studies have been carried out manually. In this paper, we outline the road towards using computational methods from natural language processing for the construction of discourse networks – working towards an integrated methodological framework for Computational Social Science. We make three contributions: (a) a requirements analysis; (b) a manually annotated corpus of claims from affiliation network actors claims actor network (discourse coalition) concept network (argumentative cluster) c1 c2 c3 c5 c4 a1 a2 a3 a5 a4 Figure 1: Actor, affiliation, and concept networks debates about migration found in German newspaper reports; (c) initial modeling results that already demonstrate the usefulness of computational methods in this context. 2 Discourse Networks: Actors and Claims Discursive interventions are one element among several that influence policy making (Schmidt and Radaelli, 2004). But the exact mechanisms of political discourse and under which condition discursive interventions do or do not translate into political decisions are largely unknown. At least there seems to be a general agreement that the formation and evolution of discourse coalitions is a core mechanism (Hajer, 1993; Sabatier and Weible, 2007). A discourse coalition can be generally defined as “a group of actors who share a social construct” (Hajer, 1993, p. 43). 
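Figure 1's three structures can be made concrete with a small amount of code: the affiliation network is a set of signed actor-claim edges, and the actor-side and concept-side projections are derived from it. The sketch below is a plain-Python illustration (the node names follow Figure 1); treating agreement on the same claim with the same polarity as the coalition criterion is one possible choice on our part, not the authors' specification:

```python
from itertools import combinations

# Signed affiliation edges: (actor, claim, stance), stance in {+1, -1}.
affiliations = [
    ("a1", "c1", +1), ("a2", "c1", +1),
    ("a2", "c2", -1), ("a3", "c2", -1),
]

def actor_projection(edges):
    """Discourse-coalition edges: actor pairs taking the same stance on a claim."""
    by_claim = {}
    for actor, claim, stance in edges:
        by_claim.setdefault((claim, stance), set()).add(actor)
    coalition_edges = set()
    for actors in by_claim.values():
        coalition_edges |= {tuple(sorted(pair)) for pair in combinations(actors, 2)}
    return coalition_edges

def concept_projection(edges):
    """Argumentative-cluster edges: claim pairs sharing an actor."""
    by_actor = {}
    for actor, claim, stance in edges:
        by_actor.setdefault(actor, set()).add(claim)
    cluster_edges = set()
    for claims in by_actor.values():
        cluster_edges |= {tuple(sorted(pair)) for pair in combinations(claims, 2)}
    return cluster_edges
```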
Political Claims Analysis (Koopmans and Statham, 1999) provides a framework in which claims, that is, demands, proposals, criticisms, or decisions in the form of statements or collective actions reported in newspaper articles, are attributed to (groups of) actors and categorized. Actors and claims can be represented as the two classes of nodes in a bipartite affiliation network. In Figure 1, actors are circles, claims are squares, and they are linked by edges that indicate support (green) or opposition (orange). A discourse coalition is then the projection of the affiliation network on the actor side (dotted edges), while the projection on the concept side yields the argumentative clusters present in the debate.

3 NLP and Political Science

Our analytical goals have connecting points with a range of activities in NLP. There has been considerable work in Social Media Analysis using NLP – in particular sentiment analysis (e.g. Ceron et al. 2014), but also going into fine-grained analysis of groups of users/actors (Cesare et al., 2017). Nevertheless, most analyses of social media typically concern relatively broad categories, such as party preferences (see Hong et al. 2016 for a comparison of social media and news texts). NLP techniques are also used for stance classification (e.g. Vilares and He 2017) and measuring ideology in speeches (Sim et al., 2013), and there is a fair amount of work on agenda-setting and framing (e.g. Tsur et al. 2015; Field et al. 2018). To our knowledge, the fine-grained distinctions for both actors and claims that are necessary for discourse network construction (cf. Section 4) have not been explored in depth. Also related is the growing field of argumentation analysis/mining (e.g. Peldszus and Stede 2013; Swanson et al. 2015; Stab and Gurevych 2017). However, a core interest there is analyzing the argument structure of longer pieces of argumentative text (i.e., claims and their (recursive) justifications), whereas we focus on the core claims that actors put forward in news coverage. The aspect of dynamics in the interaction among actors is shared with work on the extraction of actor/character networks from texts, which has been applied mostly to literary texts (Elson et al., 2010; Hassan et al., 2012; Iyyer et al., 2016).

4 Computational Construction of Discourse Networks

Seen as an end-to-end task, the computational construction of affiliation networks from newspaper articles as introduced in Section 2 combines binary relation extraction (Doddington et al., 2004; Hendrickx et al., 2010) with ontologization (Pennacchiotti and Pantel, 2006; Hachey et al., 2013, i.a.). The task can be decomposed conceptually as shown in Figure 2.

[Figure 2: Construction of an affiliation network (top) from text (bottom) as relation extraction. Example: "Labour has said it will support the amendment" -> Task 1: claim detection; Task 2: actor detection; Task 3: actor mapping (Labor party); Task 4: claim mapping (category A13: delay Brexit); Task 5: claim attribution (strong support); Task 6: aggregation.]

From bottom to top, the first task is to identify claims and actors in the text (Tasks 1 and 2). Then, they need to be mapped onto entities that are represented in the affiliation graph, that is, discourse referents for actors (Task 3: entity linking) and categories for claims (Task 4). Next, claims need to be attributed to actors and classified as support or opposition (Task 5). Finally, relations need to be aggregated across documents (Task 6).
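The decomposition lends itself to a typed interface. The sketch below illustrates Tasks 1-6 around the example from Figure 2; all names, signatures, and the placeholder implementation are ours, not the authors' system:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class AffiliationEdge:
    actor: str     # linked actor entity, e.g. "Labor party"          (Tasks 2 + 3)
    claim: str     # claim category, e.g. "A13" ("delay Brexit")      (Tasks 1 + 4)
    polarity: str  # "support" or "opposition"                        (Task 5)

def extract_edges(document: str) -> List[AffiliationEdge]:
    """Tasks 1-5 on a single newspaper report: detection, mapping, attribution.
    For Figure 2's sentence "Labour has said it will support the amendment",
    the expected output is [AffiliationEdge("Labor party", "A13", "support")]."""
    raise NotImplementedError  # interface placeholder; first models follow in Section 6

def aggregate(per_document_edges: List[List[AffiliationEdge]]) -> Dict[AffiliationEdge, int]:
    """Task 6: merge edges across documents into a weighted affiliation network."""
    return Counter(edge for edges in per_document_edges for edge in edges)
```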
This setup is related to Knowledge Base Population (McNamee et al., 2010) and presents itself as a series of rather challenging tasks: Actor and claim ontologies. The actors and claims can either be known a priori (then Tasks 3 and 4 amount to classification) or can emerge from the data (then they become clustering tasks). We assume that there is a limited set of claims that structures public debates on a given topic (Koopmans and Statham, 2010). We thus build on an expert-defined ontology of claims (cf. Section 5). With regard to actors, the issue is less clear: knowledge bases such as Wikidata cover many persons in the public eye. However new actors can appear and take on importance at any time. Discourse context. Tasks 3 and 4 regularly involves coreference resolution: in the example, the expression the amendment can only be mapped to the correct claim if its content can be inferred. Similarly, actors realized as pronouns have to be resolved. Coreference resolution is still a difficult problem (Martschat and Strube, 2014). Dependencies among tasks. The various tasks are clearly not independent of one another, and joint models have been developed for a subset of the tasks, such as coreference and relation detection (Almeida et al., 2014) or entity and relation classification (Miwa and Sasaki, 2014; Adel and Sch¨utze, 2017; Bekoulis et al., 2018). However, state-of-the-art models still struggle with sentence complexity, and there are no comprehensive models of the complete task including aggregation. 2843 C1: Steuerung von Migration (Controlling Migration) C2: Aufenthalt (Residency) C3: Integration (Integration) C4: Innere Sicherheit (Domestic Security) C5: Aussenpolitik (Foreign Policy) C6: ¨Okonomie, Arbeitsmarkt (Economy, Labor Market) C7: Gesellschaft (Society) C8: Verfahren (Procedures) Table 1: Migration: Main categories in claim ontology 5 Claim Ontology and Corpus Annotation We now demonstrate the first steps of computational discourse network construction in a concrete political context, namely the major topic of German politics of 2015: the domestic debate on (im-) migration precipitated by the war in Syria. Claim Ontology. Following established approaches to content analysis from political science (Leifeld, 2016b), we chose an approach that combines deductive and inductive elements to identify an initial set of topic-specific claim categories. First, we review the literature, extract relevant categories, and validate and extend them based on an initial sample of newspaper articles from Die Tageszeitung, a large left-leaning German quality newspaper (www.taz.de). This results in eight superordinate categories (cf. Table 1) and 89 subcategories, capturing a variety of different political positions. These categories and their definitions form the codebook that the annotation is based on.1 Annotation Process. Annotation follows a procedure successfully used by Haunss et al. (2013) in the analysis of the German nuclear phase-out debate (2011). The analysis of articles is carried out in double, independent annotation by trained student research assistants. An example of a text passage and its corresponding annotation is presented in the following sentence: (1) [Fl¨uchtlinge zum Erlernen der deutschen Sprache [...] verpflichten]Claim, will [die CDU in Niedersachsen]Actor. [Requiring refugees to learn the German language]Claim [...] is what [the CDU party in Lower Saxony]Actor wants. 
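One way to think of the outcome of Tasks 1–5 for example (1) is as a structured record linking text spans, a resolved actor, one or more claim categories, and a polarity. The sketch below is purely illustrative: the field names are not the project's actual schema, and the polarity shown is our reading of the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClaimAnnotation:
    """One annotated claim, as in example (1); field names are illustrative."""
    claim_span: str        # text span expressing the claim (Task 1)
    actor_span: str        # text span naming the actor (Task 2)
    actor_ref: str         # resolved actor entity (Task 3)
    categories: List[str]  # claim categories, possibly several (Task 4)
    polarity: str          # "support" or "oppose" (Task 5)

example = ClaimAnnotation(
    claim_span="Flüchtlinge zum Erlernen der deutschen Sprache [...] verpflichten",
    actor_span="die CDU in Niedersachsen",
    actor_ref="CDU Niedersachsen",
    categories=["C3"],     # integration
    polarity="support",    # our reading of the example
)
```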
Annotators mark the claim and the actor, classify the claim as (a subtype of) C3, integration, link them, and mark the position (support/opposition). That is, Tasks 1–5 from Section 4 are all carried 1For the full codebook, see the supplementary material. Figure 3: Screenshot of annotation platform, with text (back) and annotation window (front) out. Crucially, cross-cutting (“multi-label”) claims can instantiate multiple categories. In our annotation, about 17% of all claims carry multiple labels. Frequent combinations at the top level are C2+C8 (procedural aspects of residency) and C1+C5 (international perspective on migration control). Building on experience and tool components from text annotation efforts in Digital Humanities projects (in particular the Center for Reflected Text Analytics, https://www.creta. uni-stuttgart.de/en/), we developed a web-based annotation tool, shown in Figure 3, which both streamlines annotation and encourages consistency. Annotation involves first marking claim and actor spans in the text and then selecting the correct categories for the claims and the correct referent for the actor from drop-down lists. See Blessing et al. (2019) for details. Reliability and Adjudication. We compute annotation reliability of the original student annotators for the two initial and most immediate annotation steps (cf. Figure 2), namely claim detection (Task 1) and classification (Task 4). For claim detection, a classical single-label classification task, we use Cohen’s Kappa: For each sentence, we compare whether the two annotators classified the sentence as part of a claim or not. We obtain a Kappa value of 0.58. For claim classification, a multi-label classification task, we cannot use Kappa. Instead, we compute Macro-F1 for all top level categories, and obtain an average F1 score of 63.5%. These numbers, while still leaving room for improvement, indicate moderate to substantial agreement among the student annotators. The two sets of annotations per document are subsequently reviewed and adjudicated by senior domain experts 2844 to create a reliable gold standard. Dataset Release. With this paper, we publicly release 423 fully annotated articles from the 2015 Tageszeitung. 179 articles contain at least one claim. In total, 982 Claims in 764 different text passages have been annotated. This includes additional information such as actor attributes (name, party membership, etc.), date and position. This dataset – together with documentation and annotation guidelines is available for research purposes at https://github.com/ mardy-spp/mardy_acl2019. Remaining Challenges. A number of challenges remain. A technical one is the identification of relevant documents: keyword-based methods turn out to be insufficient. A conceptual one is that not all decisions made in the design of the claim ontology hold up to broad-coverage annotation. Political science has defined the ideal of ‘multi-pass coding’ (Leifeld, 2016b) according to which the researcher constantly reviews and updates annotation in an iterative process, adding and collapsing categories as needed. We perform such updates at regular intervals, but they can only be meaningfully applied to the adjudicated gold standard, not individual annotations. Thus, our reliability is likely underestimated by the analysis above. 6 Modeling results Due to space restrictions, this paper only reports on first steps towards computational construction of discourse networks. 
Specifically, we present pilot models for Tasks 1 and 4 (claim identification and claim classification), the two tasks for which we also presented reliability analyses in Section 5.

Data setup. We randomly sampled 90% of our dataset for training and evaluated on the remaining 10%; the split is published with the dataset. We discarded articles with no claims.

Claim Identification. We model claim identification as a sequence labeling task: the model labels each token in a sentence as B-Claim, I-Claim or Outside, adopting a BIO schema. We experiment with two model architectures. The first is BERT (Devlin et al., 2018), a state-of-the-art transformer-based neural network model, which we fine-tune on our training data. The second is a current architecture for sequence labeling that consists of an embedding layer, an LSTM layer, and a CRF layer.2 We use word embeddings from FastText (Bojanowski et al., 2017). In order to add task- and domain-specific representations and to address the Out-Of-Vocabulary (OOV) problem, we experiment with a second embedding approach, namely learning character-based embeddings from which we compute word-level embeddings by feeding the character embeddings through a CNN and max-pooling the output. Depending on the experimental condition (see below), we use either just the word-based embeddings or a concatenation of the word-based and character-based embeddings, and train the embeddings on different corpora. All embeddings are fed to a bidirectional LSTM layer for contextualization. To jointly model the label sequence, we use a CRF layer on top. For a sequence with n words, we parameterize the distribution over all possible label sequences Y as

p(y \mid d; W) = \frac{\prod_{i=1}^{n} \phi_i(y_{i-1}, y_i, d)}{\sum_{y' \in Y} \prod_{i=1}^{n} \phi_i(y'_{i-1}, y'_i, d)}    (1)

where d = [d_1, d_2, \dots, d_n] is the sequence of representations produced by the BiLSTM for the input words and \phi_i(y_{i-1}, y_i, d) is a function calculating emission and transition potentials between the tags y_{i-1} and y_i. During training, we maximize the log-likelihood function over the training set,

L(W) = \sum_i \log p(y \mid d; W)    (2)

During inference, the sequence with the highest conditional probability is predicted by a Viterbi decoder:

\operatorname{argmax}_{y \in Y} \; p(y \mid d; W)    (3)

Experimentally, we compared BERT against versions of our own model which (a) do and do not include the CRF layer; (b) do or do not use the character-level embeddings; (c) train embeddings on different corpora. We measure performance as F1 scores per class and macro F1 scores overall. The results are shown in Table 2.

Method                           B-C    I-C    O      Macro
(1) EmbWiki:w + BiLSTM           31.3   37.5   93.5   54.1
(2) EmbTAZ:w + BiLSTM            38.5   43.9   93.6   58.7
(3) EmbTAZ:w,c + BiLSTM          40.0   44.1   93.1   59.1
(4) EmbTAZ:w,c + BiLSTM + CRF    49.4   53.8   95.5   66.3
(5) EmbWiki:w,c + BiLSTM + CRF   35.1   39.1   90.6   55.0
(6) BERT                         49.5   52.4   94.7   65.5

Table 2: Claim identification scores on the evaluation set: F1 for the BIO labels and macro average. EmbCorpus:type gives the corpus used to train the embeddings and their type (w: word, c: char); for example, EmbTAZ:w,c is the version with character- and word-level embeddings trained on the TAZ corpus.

We started with a simple model, (1), using the default Wikipedia FastText word-level embeddings and without a CRF layer. Moving to in-domain TAZ embeddings, (2), improves performance by 4 points macro F1, with a slight further improvement of 0.5 points from adding character-level embeddings in (3). Adding a CRF layer to obtain the full model, (4), yields a further major increase of 7 points F-score and results in the best overall model with 66.3 macro F1. This model also outperforms BERT, (6), numerically in macro F1 and for the two classes (I-C) and (O). The full model still profits substantially from the in-domain embeddings: replacing them with Wikipedia-trained ones in model (5) results in a drop of 11 points.

2 See supplement for details and hyperparameters.
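To make Eqs. 1–3 concrete, the following is a minimal, unbatched sketch of such a BiLSTM-CRF tagger for the three BIO labels; it omits the character-level CNN, FastText initialization and batching, and all hyperparameters are illustrative rather than the ones used in the experiments. Emission scores come from a projection of the BiLSTM states, a learned transition matrix plays the role of the pairwise potentials in Eq. 1, the forward algorithm computes the partition function, and Viterbi decoding implements Eq. 3.

```python
import torch
import torch.nn as nn

class BiLSTMCRFTagger(nn.Module):
    """Minimal BiLSTM-CRF for BIO claim tagging (single sentence, no batching)."""

    def __init__(self, vocab_size, num_tags=3, emb_dim=300, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)                   # emission potentials
        self.trans = nn.Parameter(torch.randn(num_tags, num_tags))    # trans[i, j]: tag i -> tag j

    def _emissions(self, token_ids):                                  # token_ids: (1, n)
        states, _ = self.lstm(self.emb(token_ids))
        return self.proj(states).squeeze(0)                           # (n, num_tags)

    def neg_log_likelihood(self, token_ids, tags):                    # tags: list of gold tag ids
        em = self._emissions(token_ids)
        n = em.size(0)
        # Unnormalized log-score of the gold sequence (numerator of Eq. 1).
        gold = em[0, tags[0]] + sum(
            self.trans[tags[i - 1], tags[i]] + em[i, tags[i]] for i in range(1, n))
        # Forward algorithm: log partition function (denominator of Eq. 1).
        alpha = em[0]
        for i in range(1, n):
            alpha = torch.logsumexp(alpha.unsqueeze(1) + self.trans, dim=0) + em[i]
        return torch.logsumexp(alpha, dim=0) - gold

    def viterbi_decode(self, token_ids):                              # implements Eq. 3
        em = self._emissions(token_ids)
        score, backpointers = em[0], []
        for i in range(1, em.size(0)):
            total = score.unsqueeze(1) + self.trans                   # (tags, tags)
            best, idx = total.max(dim=0)
            backpointers.append(idx)
            score = best + em[i]
        path = [int(score.argmax())]
        for idx in reversed(backpointers):
            path.append(int(idx[path[-1]]))
        return list(reversed(path))                                   # tag ids, e.g. 0=B, 1=I, 2=O

# Usage sketch with placeholder token ids and gold tags.
model = BiLSTMCRFTagger(vocab_size=10000)
tokens = torch.randint(0, 10000, (1, 8))
loss = model.neg_log_likelihood(tokens, [2, 2, 2, 0, 1, 1, 1, 1])
loss.backward()
print(model.viterbi_decode(tokens))
```

Training then amounts to minimizing neg_log_likelihood over the training sentences, i.e. maximizing Eq. 2.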
Claim Classification. For our experiments on claim classification, we assume that claims have already been detected. To each claim span, we assign one or more of the top categories from the claim ontology (cf. Section 5), i.e., we perform multi-class multi-label classification. In terms of models, we evaluate a fine-tuned version of BERT against three standard classification architectures: a unigram Naive Bayes model, and Multi-Layer Perceptron (MLP) and BiLSTM architectures based on the TAZ-trained FastText embeddings that performed well in the previous experiment. All models handle the multi-label setting by making a binary decision for each class. Table 3 shows the results, using the same F1 measures as before.

Method    C1   C2   C3   C4   C5   C6   C7   C8   Macro
NB        46   50    0    0   43    0   29    0   21
MLP       73   53    0    0   67    0   57   46   37
BiLSTM    71   71    0    0   63    0   78   24   38
BERT      44   82   54   29   50    0   53   57   46

Table 3: Claim categorization performance of several models. Columns C1–C8 show the F1 score for each category; Macro reports the macro-average F1 score. NB: Naive Bayes, MLP: Multi-Layer Perceptron.

BERT excels at this task, followed by the two embedding-based models; Naive Bayes comes last. Interestingly, the models differ in their performance across classes. BERT tends to make better predictions than the other models for small, homogeneous classes (C3: integration, C4: security), while MLP and BiLSTM do better on the larger and less clearly delineated classes (C1: migration control, C7: society).

7 Conclusion

In this paper, we have sketched the way towards a Computational Social Science (CSS) framework for the construction of discourse networks (claims and actors) from news coverage of political debates, which has great potential for expanding the empirical basis for research in political science. The complexity of the scenario (fine-grained categories, multi-category claims, complex relations, aggregation) suggests that an attempt at automating the construction in its entirety is currently not realistic at a quality that makes it useful for political scientists. In the broader picture of a project that derives its motivation both from NLP and from CSS, scaling the computational component is an important objective, but one that should never come at the cost of reliability of the analytical components and methodological validity from the point of view of political science. A carefully laid out task analysis, as put forward in this paper, provides the basis for exploring more interactive “mixed methods” frameworks (see the discussion in Kuhn (to appear)): computational models for a given set of claim categories can feed semi-automatic corpus annotation through manual post-correction of predictions. Finally, an interleaved cross-disciplinary collaboration may support the future research process further: the claim ontology for a new field of debate could be constructed in a bootstrapping process, combining the political scientists’ analytical insights with (preliminary) predictions of computational seed models from partially overlapping fields.
In our collaboration, systematic tool support has already made the process of codebook development considerably more effective. Acknowledgments We acknowledge funding by Deutsche Forschungsgemeinschaft (DFG) through MARDY (Modeling Argumentation Dynamics) within SPP RATIO and by Bundesministerium f¨ur Bildung und Forschung (BMBF) through Center for Reflected Text Analytics (CRETA). 2846 References Heike Adel and Hinrich Sch¨utze. 2017. Global normalization of convolutional neural networks for joint entity and relation classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1723–1729, Copenhagen, Denmark. Mariana S. C. Almeida, Miguel B. Almeida, and Andr´e F. T. Martins. 2014. A joint model for quotation attribution and coreference resolution. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 39–48, Gothenburg, Sweden. Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications, 114:34– 45. Andr´e Blessing, Nico Blokker, Sebastian Haunss, Jonas Kuhn, Gabriella Lapesa, and Sebastian Pad´o. 2019. An environment for the relational annotation of political debates. In Proceedings of ACL System Demonstrations, Florence, Italy. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Andrea Ceron, Luigi Curini, Stefano M. Iacus, and Giuseppe Porro. 2014. Every tweet counts? how sentiment analysis of social media can improve our knowledge of citizens’ political preferences with an application to italy and france. New Media & Society, 16(2):340–358. Nina Cesare, Christan Grant, and Elaine Okanyene Nsoesie. 2017. Detection of user demographics on social media: A review of methods and recommendations for best practices. CoRR, abs/1702.01807. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of LREC, Lisbon, Portugal. David K Elson, Nicholas Dames, and Kathleen R McKeown. 2010. Extracting social networks from literary fiction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 138–147, Uppsala, Sweden. Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in Russian news: a computational analysis of intricate political strategies. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 3570– 3580, Brussels, Belgium. Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, and James R. Curran. 2013. Evaluating entity linking with Wikipedia. Artificial Intelligence, 194:130 – 150. Maarten A Hajer. 1993. Discourse Coalitions and the Institutionalization of Practice: The Case of Acid Rain in Britain. In Frank Fischer and John Forester, editors, The Argumentative Turn in Policy Analysis and Planning, pages 43–76. Duke University Press. Ahmed Hassan, Amjad Abu-Jbara, and Dragomir Radev. 2012. 
Extracting signed social networks from text. In Workshop Proceedings of TextGraphs7 on Graph-based Methods for Natural Language Processing, pages 6–14, Jeju, South Korea. Sebastian Haunss, Matthias Dietz, and Frank Nullmeier. 2013. Der Ausstieg aus der Atomenergie. Diskursnetzwerkanalyse als Beitrag zur Erkl¨arung einer radikalen Politikwende. Zeitschrift f¨ur Diskursforschung, 1(3):288–316. Sebastian Haunss and Jeanette Hofmann. 2015. Entstehung von Politikfeldern – Bedingungen einer Anomalie. dms – der moderne staat, 8(1):29–49. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the International Workshop on Semantic Evaluation, pages 33–38, Uppsala, Sweden. Lingzi Hong, Weiwei Yang, Philip Resnik, and Vanessa Fr´ıas-Mart´ınez. 2016. Uncovering topic dynamics of social media and news: The case of Ferguson. In Proceedings of Social Informatics, pages 240–256, Bellevue, WA. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1534–1544, San Diego, CA. Ruud Koopmans and Paul Statham. 1999. Political Claims Analysis: Integrating Protest Event And Political Discourse Approaches. Mobilization, 4(2):203–221. Ruud Koopmans and Paul Statham. 2010. Theoretical Framework, Research Design, and Methods. In Ruud Koopmans and Paul Statham, editors, The Making of a European Public Sphere, pages 34–59. Cambridge University Press. 2847 Jonas Kuhn. to appear. Computational text analysis within the humanities: How to combine working practices from the contributing fields? Language Resources and Evaluation. Philip Leifeld. 2016a. Discourse Network Analysis: Policy Debates as Dynamic Networks. In Jennifer Nicoll Victor, Alexander H. Montgomery, and Mark Lubell, editors, The Oxford Handbook of Political Networks. Oxford University Press. Philip Leifeld. 2016b. Policy Debates as Dynamic Networks: German Pension Politics and Privatization Discourse. Campus Verlag, Frankfurt/New York. Sebastian Martschat and Michael Strube. 2014. Recall error analysis for coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2070–2081, Doha, Qatar. Paul McNamee, Hoa Trang Dang, Heather Simpson, Patrick Schone, and Stephanie Strassel. 2010. An evaluation of technologies for knowledge base population. In Proceedings of the Seventh International Language Resources and Evaluation Conference, Valletta, Malta. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1858–1869, Doha, Qatar. Andreas Peldszus and Manfred Stede. 2013. From argument diagrams to argumentation mining in texts: A survey. International Journal of Cognitive Informatics and Natural Intelligence, 7(1):1–31. Marco Pennacchiotti and Patrick Pantel. 2006. Ontologizing semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 793–800, Sydney, Australia. 
Paul A. Sabatier and Cristopher M. Weible. 2007. The Advocacy Coalition Framework: Innovations and Clarifications. In Paul A. Sabatier, editor, Theories of the Policy Process, pages 189–220. Westview Press. Vivien A. Schmidt and Claudio M. Radaelli. 2004. Policy Change and Discourse in Europe: Conceptual and Methodological Issues. West European Politics, 27(2):183. Yanchuan Sim, Brice D. L. Acree, Justin H. Gross, and Noah A. Smith. 2013. Measuring ideological proportions in political speeches. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 91–101, Seattle, WA. Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619–659. Reid Swanson, Brian Ecker, and Marilyn Walker. 2015. Argument mining: Extracting arguments from online dialogue. In Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 217–226, Prague, Czech Republic. Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1629– 1638, Beijing, China. David Vilares and Yulan He. 2017. Detecting perspectives in political debates. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1573–1582, Copenhagen, Denmark. Pieter de Wilde. 2011. No Polity for Old Politics? A Framework for Analyzing the Politicization of European Integration. Journal of European Integration, 33(5):559–575. Michael Z¨urn. 2014. The politicization of world politics and its effects: Eight propositions. European Political Science Review, 6(01):47–71.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2848–2853 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2848 Analyzing Linguistic Differences between Owner and Staff Attributed Tweets Daniel Preot¸iuc-Pietro Bloomberg LP [email protected] Rita Devlin Marier Bloomberg LP [email protected] Abstract Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to distinguish between owner and staff attributed tweets with good accuracy, even when not using any training data from that account. 1 Introduction Social media has become one of the main venues for breaking news that come directly from primary sources. Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions. However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner. This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of an account. Direct applications include predicting owner authored tweets for unseen users and can be useful to political or PR advisers to gain a better understanding on how to craft more personal or engaging messages. Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015). However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person. No previous study has Figure 1: Example of a politician account where signed tweets are attributed to the account owner. looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account. Figure 1 shows an example of a U.S. politician who signs their tweets by adding ‘-PM’ at the end of the tweet. Staff posts are likely to be different in terms of topics, style, timing or impact to posts attributed to the owner of the account. The goal of the present study is thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff. To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. politicians that are attributed to the owner or their staff.1 Evaluation on unseen accounts leads to an accuracy of up to .741 AUC. 
Similar account sharing behaviors exists in several other domains such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets or with organi1The data is available at: https://github.com/ danielpreotiuc/signed-tweets 2849 zational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service. Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding on how to craft more personal messages which can be useful to political or PR advisers. 2 Related Work Several studies have looked at predicting the type of a Twitter account, most frequently between individual or organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017). A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), geolocation (Cheng et al., 2010), political preference (Volkova et al., 2014; Preot¸iucPietro et al., 2017), income (Preot¸iuc-Pietro et al., 2015), impact (Lampos et al., 2014), socioeconomic status (Aletras and Chamberlain, 2018), race (Preot¸iuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013a; Preot¸iuc-Pietro et al., 2016). Related to our task is authorship attribution, where the goal is to predict the author of a given text. With few exceptions (Schwartz et al., 2013b), this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009). In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs. staffer) which, unlike authorship attribution, allows for experiments across multiple user accounts. Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about. Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians and within the topics of the tweets from an account should be similar to each other. Pastiche detection is another related area of research (Dinu et al., 2012), where models are trained to distinguish between an original text and a text written by one who aims to imitate the style of the original author, resulting in the documents having similar topics. 3 Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff. Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011). The rest of the tweets are implicitly attributed to their staff. Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets. We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed). We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts. 
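As an illustration of this first filtering step, the sketch below flags candidate accounts whose profile description contains one of the signature keyphrases listed above; the case-insensitive substring matching and the example descriptions are our own assumptions rather than the exact procedure used.

```python
# Keyphrases from the paper that indicate a personal tweet signature.
SIGNATURE_KEYPHRASES = (
    "tweets by me signed", "tweets signed", "tweets are signed",
    "staff unless noted", "tweets from staff unless signed",
    "tweets signed by", "my tweets are signed",
)

def mentions_signature(description: str) -> bool:
    """True if a profile description mentions a tweet-signing convention."""
    text = description.lower()
    return any(phrase in text for phrase in SIGNATURE_KEYPHRASES)

# Hypothetical profile descriptions; only the first would be kept for manual review.
descriptions = [
    "Senator for X. Tweets signed -PM are from me, all others from staff.",
    "Official account of the X campaign.",
]
candidates = [d for d in descriptions if mentions_signature(d)]
print(candidates)
```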
We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic and we observe that the most numerous category of users who sign their messages are U.S. politicians, which leaves us with 147 accounts. We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200). We remove the retweets made by an account, as these are not attributed to either the account owner or their staff. This results in a data set with a total of 202,024 tweets. We manually identified each user’s signature from their profile description. To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression. We remove the signature from all predictive experiments and feature analyses as this would make the classification task trivial. In total, 9,715 tweets (4.8% of the total) are signed by the account owners. While our task is to predict if a tweet is attributed to the owner or its staff, we assume this as a proxy to authorship if account owners are truthful when using the signature in their tweets. There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception. We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017). Further, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens. 2850 4 Features We use a broad set of linguistic features motivated by past research on user trait prediction (Preot¸iucPietro et al., 2015, 2017) in our attempt to predict and interpret the difference between owner and staff attributed tweets. These include: LIWC. Traditional psychology studies use a dictionary-based approach to representing text. The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories. Each message is thereby represented as a frequency distribution over these categories. Word2Vec Clusters. An alternative to LIWC is to use automatically generated word clusters. These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar. The clusters help reduce the feature space and provide good interpretability. We use the method by Preot¸iuc-Pietro et al. (2015) to compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes. We present results using 200 topics as this gave the best predictive results. Each message is thus represented as an unweighted distribution over clusters. Sentiment & Emotions. We also investigate the extent to which tweets posted by the account owner express more or fewer emotions. The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise. We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Mohammad and Turney, 2010, 2013). Using these models, we assign sentiment and emotion probabilities to each message. Unigrams. 
We use the bag-of-words representation to reduce each message to a normalised frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in total). We chose this smaller vocabulary that is more representative of words used by a larger set of users such that models would be able to transfer better to unseen users. Tweet Features. We compute additional tweetlevel features such as: the length in characters and tokens (Length), the type of tweet encoding if this is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time) and the number of retweets and likes the tweet received (Impact). Although the latter features are not available in a real-time predictive scenario, they are useful for analysis. 5 Prediction Our hypothesis is that tweets attributed to the owner of the account are different than those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data. Hence, we build predictive models and test them in two setups. First, we split the users into ten folds. Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning and tweets from the final 10% of the users are used in testing (Users). In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets). We report the average performance across the ten folds. Due to class imbalance – only 4.8% of tweets are posted by the account owners – results are measured in ROC AUC, which is a more suitable metric in this setup. In our predictive experiments, we used logistic regression with Elastic Net regularization. As features, we use all feature types described in the previous section separately as well as together using a logistic regression model combining all feature sets (Combined). The results using both experimental setups – holding-out tweets or users – are presented in Table 1. Results show that we can predict owner tweets with good performance and consistently better than chance, even when we have no training data for the users in the test set. The held-out user experimental setup is more challenging as reflected by lower predictive numbers for most language features, except for the LIWC features. One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns which transfer better to unseen users rather than overfit the users from the training data. 2851 ROC AUC Feature Set Users Tweets Majority Class .500 .500 Tweet Features Length .619 .664 Tweet Type .654 .660 Post Time .554 .585 Impact .573 .718 LIWC .720 .724 W2V Clusters .676 .744 Sentiment & Emotions .568 .567 Unigrams .649 .857 Combined .741 .872 Table 1: Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC. Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users). 6 Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff. A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets compared to only 4.8% on average. 
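Before turning to that analysis, here is a minimal sketch of the Section 5 prediction setup: logistic regression with Elastic Net regularization, evaluated with folds that hold out entire users and scored with ROC AUC. The synthetic feature matrix, class balance and hyperparameters (l1_ratio, C) are placeholders rather than the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))           # e.g. LIWC + cluster + tweet features
y = rng.binomial(1, 0.05, size=2000)      # ~5% owner-attributed tweets
users = rng.integers(0, 150, size=2000)   # account id of each tweet

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=users):
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0, max_iter=5000)
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"held-out-user ROC AUC: {np.mean(aucs):.3f}")
```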
We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users. The data set is obtained as follows. Each account can contribute a minimum of 10, maximum of 100 owner attributed tweets. We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the owner. Newer messages are preferred when sampling. This leads to a data set of 28,150 tweets with exactly a tenth of them attributed to the account owners (2,815). We perform analysis of all previously described feature sets using Pearson correlations following Schwartz et al. (2013a). We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was attributed to the account owner or not. We correct for multiple comparisons using Simes correction. Top unigrams correlated with owner attributed tweets are presented in Table 3, with the other group textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2. Tweet feature results are presented in Table 4. LIWC Features r Name Top Words .111 FUNCTION to, the, for, in, of, and, a, is, on, out .102 PRONOUN our, we, you, i, your, my, us, his .101 AFFECT great, thank, support, thanks, proud, care .098 SOCIAL our, we, you, your, who, us, his, help, they .107 PREP to, for, in, of, on, at, with, from, about .095 VERB is, are, be, have, will, has, thank, support Word2Vec Clusters Features r Top Words .079 great, thank, support, thanks, proud, good, everyone .049 led, speaker, charge, memory, universal, speakers .047 happy, wishing, birthday, wish, miss, wishes, lucky .042 their, families, protesc, children, communities, veterans .042 an, honor, win, congratulations, congrats, supporting .042 family, friends, old, mom, daughter, wife, father Sentiment & Emotion Features r Name Top Words .090 Positive join, proud, working, good, happy .038 Negative tax, fight, fighting, small, violence, gun Table 2: Pearson correlations of group features (maximum six per type) with owner attributed tweets. No features are significantly correlated with staff attributed tweets. All correlations are significant at p < .01, twotailed t-test, Simes corrected. Token r Token r . .102 & .049 to .081 I .045 offer .071 ” .045 my .070 prayers .043 and .060 a .042 for .065 you .042 leadership .061 in .040 the .057 your .040 of .054 our .039 , .0511 thank .039 all .050 have .038 Table 3: Unigrams with the highest Pearson correlations to owner tweets. No unigrams are significantly correlated with staff attributed tweets. All correlations are significant at p < .01, two-tailed t-test, Simes corrected. Feature µ Owner µ Staff # Chars 105.4 102.4 # Tokens 23.2 21.4 Contains URL 45.7% 73.9% @-reply 4.2% 9.5% Sent on Weekends 23.5% 20.7% # Retweets 29.4 38.0 # Likes 82.3 79.1 Table 4: Mean values of tweet features in owner and staff attributed tweets. All differences between means shown in this table are significant at p < .001, MannWhitney U test, Simes corrected. 2852 Our analysis shows that owner tweets are associated to a greater extent with language destined to convey emotion or a state of being and to signal a personal relationship with another political figure. Tweets of congratulations, condolences and support are also specific of signed tweets. 
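The correlation analysis behind Tables 2 and 3 can be sketched as follows: each normalized feature is correlated with a binary indicator for owner-attributed tweets, and the resulting p-values are corrected for multiple comparisons. The statsmodels 'simes-hochberg' procedure is used here as a stand-in for the Simes correction, and the random data is purely illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
features = rng.random(size=(28150, 300))           # per-message feature values (placeholder)
features /= features.sum(axis=1, keepdims=True)    # normalize each message to unit sum
is_owner = rng.binomial(1, 0.1, size=28150)        # 1 if the tweet is owner-attributed

rs, ps = zip(*(pearsonr(features[:, j], is_owner) for j in range(features.shape[1])))
reject, p_adj, _, _ = multipletests(ps, alpha=0.01, method="simes-hochberg")

significant = [(j, rs[j]) for j in np.flatnonzero(reject)]
print(sorted(significant, key=lambda x: -abs(x[1]))[:10])   # strongest correlates, if any
```

In the real data, the owner-attributed (signed) tweets also differ in how they are received and when they are posted.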
These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets. Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain less links to websites or images. Remarkably, there are no textual features significantly correlated with staff attributed tweets. An analysis showed that these are more diverse and thus no significant patterns are consistent in association with text features such as unigrams, topic or LIWC categories. 7 Conclusions This study introduced a novel application of NLP: predicting if tweets from an account are attributed to their owner or to staffers. Past research on predicting and studying Twitter account characteristics such as type or personal traits (e.g., gender, age) assumed that the same person is authoring all posts from that account. Using a novel data set, we showed that owner attributed tweets exhibit distinct linguistic patterns to those attributed to staffers. Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC. Future work could study other types of accounts with similar posting behaviors such as organizational accounts, explore other sources for ground truth tweet identity information (Robinson, 2016) or study the effects of user traits such as gender or political affiliation in tweeting signed content. References Nikolaos Aletras and Benjamin Paul Chamberlain. 2018. Predicting Twitter User Socioeconomic Attributes with Network and Language Information. In Proceedings of the 29th on Hypertext and Social Media, HT, pages 20–24. D. John Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1301–1309. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you Tweet: A Content-Based Approach to Geo-Locating Twitter Users. In Proceedings of the 19th ACM Conference on Information and Knowledge Management, CIKM, pages 759– 768. Munmun De Choudhury, Nicholas Diakopoulos, and Mor Naaman. 2012. Unfolding the event landscape on twitter: classification and exploration of user categories. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, CSCW, pages 241–244. Liviu P Dinu, Vlad Niculae, and Octavia-Maria S¸ulea. 2012. Pastiche detection based on stopword rankings: exposing impersonators of a romanian writer. In Proceedings of the Workshop on Computational Approaches to Deception Detection, pages 72–77. Paul Ekman. 1992. An Argument for Basic Emotions. Cognition & Emotion, 6(3-4):169–200. Patrick Juola et al. 2008. Authorship attribution. Foundations and Trends R⃝in Information Retrieval, 1(3):233–334. Moshe Koppel, Jonathan Schler, and Shlomo Argamon. 2009. Computational methods in authorship attribution. Journal of the Association for Information Science and Technology, 60(1):9–26. Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and Characterising User Impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL, pages 405–413. Ulrike von Luxburg. 2007. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4):395– 416. Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, and C´ecile Paris. 2017. Demographic Inference on Twitter using Recursive Neural Networks. 
volume 2 of ACL, pages 471–477. James McCorriston, David Jurgens, and Derek Ruths. 2015. Organizations are users too: Characterizing and detecting the presence of organizations on twitter. ICWSM, pages 650–653. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 746–751. Saif M. Mohammad and Peter D. Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, NAACL, pages 26–34. 2853 Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. Computational Intelligence, 29(3):436–465. Dong Nguyen, Noah A Smith, and Carolyn P Ros´e. 2011. Author age prediction from text using linear regression. In Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, ACL, pages 115–123. Marco Pennacchiotti and Ana-Maria Popescu. 2011. A Machine Learning Approach to Twitter User Classification. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, ICWSM, pages 281–288. James W. Pennebaker, Roger J. Booth, Ryan L. Boyd, and Martha E. Francis. 2015. Linguistic Inquiry and Word Count: LIWC2015. Austin, TX: Pennebaker Conglomerates. James W. Pennebaker, Martha E. Francis, and Roger J. Booth. 2001. Linguistic Inquiry and Word Count. Mahway: Lawrence Erlbaum Associates. Politico. 2017. Trump credits social media for his election. https://www. politico.com/story/2017/10/20/ trump-social-media-election-244009. Marius Popescu and Liviu P Dinu. 2007. Kernel Methods and String Kernels for Authorship Identification: The federalist Papers Case. In Proceedings of the 2007 International Conference Recent Advances in Natural Language Processing, RANLP. Daniel Preot¸iuc-Pietro, Jordan Carpenter, Salvatore Giorgi, and Lyle Ungar. 2016. Studying the Dark Triad of Personality using Twitter Behavior. In Proceedings of the 25th ACM Conference on Information and Knowledge Management, CIKM, pages 761– 770. Daniel Preot¸iuc-Pietro, Ye Liu, Daniel J. Hopkins, and Lyle Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In Proceedings of the 55th Conference of the Association for Computational Linguistics, ACL, pages 729–740. Daniel Preot¸iuc-Pietro and Lyle Ungar. 2018. UserLevel Race and Ethnicity Predictors from Twitter Text. In Proceedings of the 27th International Conference on Computational Linguistics, COLING, pages 1534–1545. Daniel Preot¸iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying User Income through Language, Behaviour and Affect in Social Media. PLoS ONE, 10(9). Daniel Preot¸iuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL, pages 729–740. Daniel Preot¸iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying user income through language, behaviour and affect in social media. PloS one, 10(9):e0138717. David Robinson. 2016. Text analysis of Trump’s tweets confirms he writes only the (angrier) Android half. 
http://varianceexplained.org/r/ trump-tweets/. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, and Martin EP Seligman. 2013a. Personality, Gender, and Age in the Language of Social Media: The Open-vocabulary Approach. PloS ONE, 8(9). H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Johannes Eichstaedt, and Lyle Ungar. 2017. DLATK: Differential Language Analysis ToolKit. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP, pages 55–60. Roy Schwartz, Oren Tsur, Ari Rappoport, and Moshe Koppel. 2013b. Authorship attribution of micromessages. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1880–1891. Jianbo Shi and Jitendra Malik. 2000. Normalized Cuts and Image Segmentation. Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905. Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. Journal of the Association for Information Science and Technology, 60(3):538–556. Carlo Strapparava and Rada Mihalcea. 2008. Learning to Identify Emotions in Text. In Proceedings of the 2008 ACM Symposium on Applied Computing, pages 1556–1560. Carlo Strapparava, Alessandro Valitutti, et al. 2004. WordNet Affect: an Affective Extension of WordNet. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, volume 4 of LREC, pages 1083–1086. Time. 2011. Obama Is Actually Writing His Own Tweets Now. http://techland.time.com/2011/06/20/obama-isactually-writing-his-own-tweets-now/. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring User Political Preferences from Streaming Communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL, pages 186–196.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2854–2859 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2854 Exploring Author Context for Detecting Intended vs Perceived Sarcasm Silviu Vlad Oprea School of Informatics University of Edinburgh Edinburgh, United Kingdom [email protected] Walid Magdy School of Informatics University of Edinburgh Edinburgh, United Kingdom [email protected] Abstract We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling. 1 Introduction Sarcasm is a form of irony that occurs when there is a discrepancy between the literal meaning of an utterance and its intended meaning. This discrepancy is used to express a form of dissociative attitude towards a previous proposition, often in the form of contempt or derogation (Wilson, 2006). Sarcasm is omnipresent on the social web and can be highly disruptive of systems that harness this data (Maynard and Greenwood, 2014). It is therefore imperative to devise model for textual sarcasm detection. The effectiveness of such models depends on the quality of labelled data used for training. Two methods are commonly used to label texts for sarcasm: manual labelling by human annotators; and tag-based distant supervision. In the latter, texts are considered sarcastic if they contain specific tags, such as #sarcasm and #sarcastic. Most work on computational sarcasm detection extracts lexical and pragmatic cues available in the text being classified (Campbell and Katz, 2012; Riloff et al., 2013; Joshi et al., 2016; Tay et al., 2018). However, sarcasm is a contextual phenomenon and detecting it often requires prior information about the author, audience and previous interactions between them, that originates beyond the text itself (Rockwell and Theriot, 2001a). In this work we investigate the impact of author context on the current sarcastic behaviour of the author. We identify author context with the embedded representation of their historical tweets. We use the term user to refer to the author of a tweet and the phrase user embedding to refer to such a representation. Given a tweet t posted by user ut with user embedding et, we address two questions: (1) Is et predictive of the sarcastic nature of t? (2) Is the predictive power of et on the sarcastic nature of t the same if t is labelled via manual labelling vs distant supervision? To our knowledge, previous research that considers author context (Rajadesingan, Zafarani, and Liu, 2015; Bamman and Smith, 2015; Amir et al., 2016; Hazarika et al., 2018) only experiments on distant supervision datasets. We experiment on datasets representative of both labelling methods, namely Riloff (Riloff et al., 2013), labelled manually, and Ptacek (Pt´aˇcek, Habernal, and Hong, 2014), labelled via distant supervision. We suggest neural models to build user embeddings and achieve state-of-the-art results on Ptacek, but not on Riloff. 
Comparing and analyzing the discrepancy, our findings indicate a difference between the sarcasm that is intended by the author, captured by distant supervision, represented in Ptacek, and sarcasm that is perceived by the audience, captured by manual labelling, represented in Riloff. This difference has been highlighted by linguistic and psycholinguistic studies in the past (Rockwell and Theriot, 2001b; Pexman, 2005), being attributed to socio-cultural differences between the author and the audience. However, up to our knowledge, it has not been considered in the context of sarcasm detection so far. Our work suggests a future research direction in sarcasm detection where the two types of sarcasm are treated as separate phenomena and socio-cultural 2855 differences are taken into account. 2 Background 2.1 Sarcasm Detection Based on the information considered when classifying a text as sarcastic or non-sarcastic, we identify two classes of models across literature: local models and contextual models. Local Models Local models only consider information available within the text being classified. Most work in this direction considers linguistic incongruity (Campbell and Katz, 2012) to be a marker of sarcasm. Riloff et al. (2013) consider a positive verb used in a negative sentiment context to indicate sarcasm. Joshi et al. (2016) use the cosine similarity between embedded representations of words. Recent work attempts to capture incongruity using a neural network with an intraattention mechanism (Tay et al., 2018). Contextual Models Contextual models utilize both local and contextual information. There is a limited amount of work in this direction. Wallace, Choe, and Charniak (2015), working with Reddit data, include information about the forum type where the post to be classified was posted. For Twitter data, Rajadesingan, Zafarani, and Liu (2015) and Bamman and Smith (2015) represent user context by a set of manually-curated features extracted from their historical tweets. Amir et al. (2016) merge all historical tweets of a user into one historical document and use the Paragraph Vector model (Le and Mikolov, 2014) to build a representation of that document. Building on their work, Hazarika et al. (2018) extract in addition personality features from the historical document. Despite reporting encouraging results, these models are only tested on datasets labelled via distant supervision. In our work, we compare the performance of our models when tested on datasets representative of both manual annotation and distant supervision. 2.2 Intended vs Perceived Sarcasm Dress et al. (2008) notice a lack of consistence in how sarcasm is defined by people of different socio-cultural backgrounds. As a result, an utterance that is intended as sarcastic by its author might not be perceived as such by audiences of different backgrounds (Rockwell and Theriot, 2001a). When a tweet is sarcastic from the perspective of its author, we call the resulting phenomenon intended sarcasm. When it is sarcastic from the perspective of an audience member, we call the phenomenon perceived sarcasm. 3 Sarcasm Datasets We test our models on two popular tweet datasets, one labelled manually and the other via distant supervision. 3.1 Riloff dataset The Riloff dataset consists of 3,200 tweet IDs. These tweets were manually labeled by third party annotators. The labels capture the subjective perception of the annotators (perceived sarcasm). 
Three separate labels were collected for each tweet and the dominant one was chosen as the final label. We attempted to collect the corresponding tweets using the Twitter API1, as well as the historical timeline tweets for each user, to be used later for building user embeddings. For a user with tweet t in Riloff, we collected those historical tweets posted before t. Only 701 original tweets, along with the corresponding user timelines, could be retrieved. Others have either been removed from Twitter, the corresponding user accounts have been disabled, or the API did not retrieve any historical tweets. Table 1 shows the label distribution across this dataset. We divided the dataset into ten buckets, using eight for training, one for validation and one for testing. The division into buckets is stratified by users, i.e. all tweets from a user end up in the same bucket. Stratification makes sure any specific embedding is only used during training, during validation, or during testing. We further ensured the overall class balance is represented in all of the three sets. Table 1 shows the size of each set. 3.2 Ptacek dataset The Ptacek dataset consists of 50,000 tweet IDs labelled via distant supervision. Tags used as markers of sarcasm are #sarcasm, #sarcastic, #satire and #irony. This dataset reflects intended sarcasm, since the original poster tagged their own tweet as sarcastic through the hashtag. In a similar setting as with Riloff we could only collect 27,177 tweets and corresponding time1https://developer.twitter.com 2856 dataset size sarcastic non-sarcastic train valid test Riloff 701 192 509 551 88 62 Ptacek 27,177 15,164 12,013 21,670 2,711 2,797 Table 1: Label distribution across our datasets; and distribution into train, validation and test sets. lines. We divided them into ten buckets and stratified by users. During preprocessing we removed all sarcasm-marking tags from both the training tweets and the historical tweets. Table 1 shows statistics on both datasets. 4 Contextual Sarcasm Detection Models Let T be a set of tweets. For any t ∈T, let ut be the user who posted tweet t. Let ht be a set of historical tweets of user ut, posted before t, with ht ∩T = ∅and let et be the embedding of user ut, i.e. a vector representation of ht. Let Y = {sarcastic, non-sarcastic} be the output space. Our goal is to find a model m : {(t, et)|t ∈T} →Y . As a baseline, we implement the SIARN (Single-Dimension Intra-Attention Network) model proposed by (Tay et al., 2018), since it achieves the best published results on both our datasets. SIARN only looks at the tweet being classified, that is SIARN(t, et) = m′(t). Further, we introduce two classes of models: exclusive and inclusive models. In exclusive models, the decision whether t ∈T is sarcastic or not is independent of t, i.e. m(t, et) = m′(et). The content of the tweet being classified is not considered, prediction being based solely on user historical tweets. The architecture of such a model is shown in Figure 1. We feed the user embedding et to a layer with softmax activations to output a probability distribution over Y . We name these models EX-[emb], where [emb] is the name of the user embedding model. Inclusive models account for both t and et, as shown in Figure 1. We start with the feature vector ft extracted by SIARN from t. We then concatenate ft with et and use an output layer with softmax activations. We name these models IN[emb], where [emb] is the user embedding model. 
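To make the two model families concrete, the sketch below implements only the classification heads (SIARN itself is not reimplemented, and all dimensions are illustrative): the exclusive head classifies from the user embedding et alone, while the inclusive head concatenates the tweet feature vector ft with et before the softmax output layer.

```python
import torch
import torch.nn as nn

class ExclusiveHead(nn.Module):
    """EX-[emb]: predicts sarcastic / non-sarcastic from the user embedding e_t only."""
    def __init__(self, user_dim=100, num_classes=2):
        super().__init__()
        self.out = nn.Linear(user_dim, num_classes)

    def forward(self, e_t):                      # e_t: (batch, user_dim)
        return torch.softmax(self.out(e_t), dim=-1)

class InclusiveHead(nn.Module):
    """IN-[emb]: concatenates the tweet feature vector f_t (e.g. from SIARN) with e_t."""
    def __init__(self, tweet_dim=100, user_dim=100, num_classes=2):
        super().__init__()
        self.out = nn.Linear(tweet_dim + user_dim, num_classes)

    def forward(self, f_t, e_t):
        return torch.softmax(self.out(torch.cat([f_t, e_t], dim=-1)), dim=-1)

# Toy usage with placeholder feature vectors.
f_t, e_t = torch.randn(16, 100), torch.randn(16, 100)
print(ExclusiveHead()(e_t).shape, InclusiveHead()(f_t, e_t).shape)   # (16, 2) twice
```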
We now look at several user embedding models that build et for a user ut as a representation of ht. Recall that ∀u ∈usr(T) : hist(u)∩T = ∅, where usr(T) is the image of T under usr. CASCADE Embeddings Up to our knowledge, the user embedding model that has proven most informative in a sarcasm detection pipeline so far is CASCADE (Hazarika et al., 2018). However, it has only been tested on a dataset of Reddit2 posts labelled via distant supervision. We test it on our datasets. Following original authors, we merge all tweets from ht in a single document dt, giving corpus C = {dt|t ∈T}. Using the Paragraph Vector model (Le and Mikolov, 2014) we generate a representation vt of dt. Next, we feed dt to a neural network pre-trained on the personality detection corpus released by Matthews and Gilliland (1999), which contains labels for the Big-Five personality traits (Goldberg, 1993). We merge the resulting hidden state pt of the network with vt using Generalized Canonical Correlation Analysis (GCCA) as described by Hazarika et al. (2018) to get et. W-CASCADE Embeddings CASCADE treats all historical tweets in the same manner. However, as studies in cognitive psychology argue (Kellogg, 2001), long-term working memory plays an important role in verbal reasoning and textual comprehension. We therefore expect recent historical tweets to have a greater influence on the current behaviour of a user, compared to older ones. To account for this, we suggest the following model that accounts for the temporal arrangement of historical tweets. We first use CASCADE to build vt r and pt r, and to merge them into et r using GCCA, ∀r ∈ht. We then divide the sequence ⟨et r1, et r2, . . . , et r|ht|⟩into ten contiguous partitions and multiply each vector with the index of the partition it belongs to. That is, we multiply et ri by i % |ht| + 1, where % is the modulus operator. By convention, the tweet with the highest index is the most recent one. Finally, we sum the resulting vectors and normalize the result to get et. ED Embeddings One of the main advantages of the encoder-decoder model (Sutskever, Vinyals, and Le, 2014), commonly used for sequence prediction tasks, is its ability to handle inputs and outputs of variable length. The encoder, a recurrent network, transforms an input sequence into an internal representation of fixed dimension. The decoder, another recurrent network, generates an 2https://www.reddit.com 2857 2XWSXWZLWKVRIWPD[ DFWLYDWLRQIXQFWLRQV 3 QRQVDUFDVWLF  3 VDUFDVWLF W WZHHW EHLQJFODVVL¿HG 6,$51 EDVHOLQHPRGHO IW IHDWXUHYHFWRU ,QFOXVLYH eW (embedding of user uW representing hW) Exclusive KW KLVWRULFDOWZHHWVRIXVHUXW ZKHUHWLVWKHWZHHWEHLQJFODVVLILHG  user embedding model CASCADEW-CASCADE ('RU6800$5< Figure 1: The architecture of the models used. Exclusive models do not use the current tweet being classified, prediction being based solely on user history. Inclusive models use both user history and the current tweet. output sequence using this representation. We use bi-directional LSTM cells (Schuster and Paliwal, 1997) and identify et ri, 1 ≤i ≤|ht|, with the internal state of the encoder after feeding in ri. The training objective is to reconstruct the input ri. We employ the same weighting technique as we did for W-CASCADE to construct et. SUMMARY Embeddings We use an encoderdecoder model as in the previous paragraph, but change the objective from reconstructing the input to summarizing it. We pre-train the model on the Gigaword standard summarization corpus3. 
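As a rough illustration of the W-CASCADE weighting, the sketch below divides the chronologically ordered per-tweet vectors into ten contiguous partitions, weights each vector by its partition index, then sums and normalizes the result. The weighting formula printed above appears garbled by extraction, so this follows only the verbal description; NumPy and the function name are assumptions, not the authors' code.

import numpy as np

def w_cascade_embedding(history_vectors, n_partitions=10):
    """Combine the chronologically ordered per-tweet vectors of a user's history
    into one embedding, weighting more recent tweets more heavily: each vector
    is multiplied by the (1-based) index of the contiguous partition it falls
    into, the weighted vectors are summed, and the sum is L2-normalized."""
    vecs = np.asarray(history_vectors, dtype=float)          # shape (|h_t|, dim)
    n = len(vecs)
    weights = np.floor(np.arange(n) * n_partitions / n) + 1  # partition index 1..10
    combined = (vecs * weights[:, None]).sum(axis=0)
    norm = np.linalg.norm(combined)
    return combined / norm if norm > 0 else combined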
5 Effect of Context on Sarcasm Detection 5.1 Experimental Setup We filter out all tweets shorter than three words and replace all words that only appear once in the entire corpus with an UNK token. Then, we encode each tweet as a sequence of word vectors initialized using GloVe embeddings (Pennington, Socher, and Manning, 2014). Following the authors SIARN, our baseline, we set the word embedding dimension to 100. We tune the dimension of all CASCADE embeddings to 100 on the validation set. For comparability, we set W-CASCADE embeddings to the same dimension. For CASCADE embeddings we make use of the implementation available at https:// github.com/SenticNet/cascade. When training ED and SUMMARY, our decoder implements attention over the input vectors. We use the general global attention mechanism suggested by Luong, Pham, and Manning (2015). We imple3https://github.com/harvardnlp/ sent-summary Model Riloff Ptacek SIARN (baseline) 0.711 0.863 exclusive EX-CASCADE 0.457 0.802 EX-W-CASCADE 0.478 0.922 EX-ED 0.546 0.873 EX-SUMMARY 0.492 0.845 inclusive IN-CASCADE 0.723 0.873 IN-W-CASCADE 0.714 0.934 IN-ED 0.739 0.887 IN-SUMMARY 0.679 0.892 Table 2: F1 score achieved on the Riloff and Ptacek datasets for both exclusive and inclusive models. Best results for each model class are highlighted in bold. Model Riloff #Riloff EX-CASCADE 0.457 0.818 EX-W-CASCADE 0.478 0.797 EX-ED 0.545 0.827 EX-SUMMARY 0.492 0.772 Table 3: F1 score achieved by the exclusive models on the #Riloff dataset, compared to Riloff dataset. Best results are highlighted in bold. ment both ED and SUMMARY using the OpenNMT toolkit (Klein et al., 2017). For comparability with SIARN, our baseline, we follow its authors in setting a batch size of 16 for the Riloff dataset, and of 512 for the Ptacek dataset, and in training for 30 epochs using the RMSProp optimizer (Tieleman and Hinton, 2012) with a learning rate of 0.001. Our code and data can be obtained by contacting us. 5.2 Results All results are reported in Table 2. User embeddings show remarkable predictive power on the Ptacek dataset. In particular, using the EX-W2858 with tag without any tag labelled sarcastic 190 2 labelled non-sarcastic 217 292 Table 4: Disagreement between manual labels and the presence of sarcasm tags in the Riloff dataset, as discussed in Section 5.3. CASCADE model, we get better results (f1-score 0.922) than the baseline (f1-score 0.863) without even looking at the tweet being predicted. On the Riloff dataset, however, user embeddings seem to be far less informative, with EX-W-CASCADE yielding an f1-score of only 0.478. Out of the exclusive models, we get the highest f1-score of 0.546 using EX-ED on Riloff. By contrast we get 0.873 on Ptacek using EX-ED. The state-of-the-art performance of exclusive models on Ptacek indicate that users seem to have a prior disposition to being either sarcastic or nonsarcastic, which can be deduced from historical behaviour. However, this behaviour can change over time, as we achieve better performance when accounting for the temporal arrangement of historical tweets, as we do in W-CASCADE. On the Riloff dataset the performance of exclusive models is considerably lower. In the following, we investigate the possible reasons for this large difference in performance between the two datasets. 5.3 Performance Analysis Riloff dataset is annotated manually, which might not reflect the intention of the users, but rather the subjective perception of the annotators. 
In this light, we could expect user embeddings to have poor predictive power. Perhaps annotator embeddings would shed more light. We noticed that many of the tweets in Riloff contain one or more of the tags that were used to mark sarcasm in Ptacek. For all tweets in Riloff, we checked the agreement between containing such a tag, and being manually annotated as sarcastic. The results are shown in Table 4. Note that the statistics shown are not for the entire dataset as published by Riloff et al. (2013), but for the subset of tweets coming from users without blocked profiles and from which we could gather historical tweets, as discussed in Section 3. We notice a large disagreement. In particular, 217 out of the 509 tweets that were annotated manually as non-sarcastic contained such a tag. The lack of coherence between the presence of sarcasm tags and manual annotations in the Riloff dataset suggests that the two labelling methods capture distinct phenomena, considering the subjective nature of sarcasm. Previous research in linguistics and psycholinguistics (Rockwell and Theriot, 2001b; Pexman, 2005) attributes this difference to sociocultural differences between the author and the audience and shows that the difference persists even when contextual information is provided. To investigate further, we re-labelled the Riloff dataset via distant supervision considering these tags as markers of sarcasm, to create the #Riloff dataset. We applied the exclusive models on #Riloff and noticed a considerably higher predictive power than on Riloff. Results are reported in Table 3. Author history seems therefore predictive of authorial sarcastic intention, but not of external perception. This could indicate that future work should differentiate between the two types of sarcasm: intended and perceived. Both are important to detect, for applications such as opinion mining for the former and hate speech detection for the latter. 6 Conclusion We studied the predictive power of user embeddings in textual sarcasm detection across datasets labelled via both manual labelling and distant supervision. We suggested several neural models to build user embeddings, achieving state-of-the-art results for distant supervision, but not for manual labelling. We account for discrepancy by reference to the different type of sarcasm captured by the two labelling methods, attributed by previous research in linguistics and psycholinguistics (Rockwell and Theriot, 2001b; Pexman, 2005) to socio-cultural differences between the author and the audience. We suggest a future research direction in sarcasm detection where the two types of sarcasm are treated as separate phenomena and socio-cultural differences are taken into account. 7 Acknowledgements This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1); the University of Edinburgh; and The Financial Times. 2859 References Amir, S.; Wallace, B. C.; Lyu, H.; Carvalho, P.; and Silva, M. J. 2016. Modelling context with user embeddings for sarcasm detection in social media. In CoNLL, 167–177. ACL. Bamman, D., and Smith, N. A. 2015. Contextualized sarcasm detection on twitter. In ICWSM, 574–577. AAAI Press. Campbell, J. D., and Katz, A. N. 2012. Are there necessary conditions for inducing a sense of sarcastic irony? Discourse Processes 49(6):459–480. Dress, M. L.; Kreuz, R. J.; Link, K. E.; and Caucci, G. M. 2008. Regional variation in the use of sarcasm. 
JLS 27(1):71–85. Goldberg, L. R. 1993. The structure of phenotypic personality traits. American psychologist 48(1):26. Hazarika, D.; Poria, S.; Gorantla, S.; Cambria, E.; Zimmermann, R.; and Mihalcea, R. 2018. Cascade: Contextual sarcasm detection in online discussion forums. In COLING, 1837–1848. ACL. Joshi, A.; Tripathi, V.; Patel, K.; Bhattacharyya, P.; and Carman, M. 2016. Are word embedding-based features useful for sarcasm detection? In EMNLP, 1006–1011. ACL. Kellogg, R. T. 2001. Long-term working memory in text production. Memory & Cognition 29(1):43–52. Klein, G.; Kim, Y.; Deng, Y.; Senellart, J.; and Rush, A. 2017. OpenNMT: Open-source toolkit for neural machine translation. In ACL, 67–72. ACL. Le, Q., and Mikolov, T. 2014. Distributed Representations of Sentences and Documents. In ICML, 1188– 1196. PMLR. Luong, T.; Pham, H.; and Manning, C. D. 2015. Effective approaches to attention-based neural machine translation. In EMNLP, 1412–1421. ACL. Matthews, G., and Gilliland, K. 1999. The personality theories of H.J. Eysenck and J.A. Gray: a comparative review. Personality and Individual Differences 26(4):583–626. Maynard, D., and Greenwood, M. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In LREC. ELRA. Pennington, J.; Socher, R.; and Manning, C. 2014. Glove: Global vectors for word representation. In EMNLP, 1532–1543. ACL. Pexman, P. M. 2005. Social Factors in the Interpretation of Verbal Irony: The Roles of Speaker and Listener Characteristics. Pt´aˇcek, T.; Habernal, I.; and Hong, J. 2014. Sarcasm detection on czech and english twitter. In COLING, 213–223. ACL. Rajadesingan, A.; Zafarani, R.; and Liu, H. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In WSDM, 97–106. ACM. Riloff, E.; Qadir, A.; Surve, P.; De Silva, L.; Gilbert, N.; and Huang, R. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In EMNLP, 704–714. ACL. Rockwell, P., and Theriot, E. M. 2001a. Culture, gender, and gender mix in encoders of sarcasm: A selfassessment analysis. Communication Research Reports 18(1):44–52. Rockwell, P., and Theriot, E. M. 2001b. Culture, gender, and gender mix in encoders of sarcasm: A selfassessment analysis. Communication Research Reports 18(1):44–52. Schuster, M., and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In NIPS, 3104–3112. MIT Press. Tay, Y.; Luu, A. T.; Hui, S. C.; and Su, J. 2018. Reasoning with sarcasm by reading in-between. In ACL, 1010–1020. ACL. Tieleman, T., and Hinton, G. 2012. Lecture 6.5— RMSProp: Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning. Accessed: 2019-03-01. Wallace, B. C.; Choe, D. K.; and Charniak, E. 2015. Sparse, contextually informed models for irony detection: Exploiting user communities, entities and sentiment. In ACL-IJCNLP, 1035–1044. ACL. Wilson, D. 2006. The pragmatics of verbal irony: Echo or pretence? Lingua 116(10):1722–1743.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2860–2871 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2860 Open Domain Event Extraction Using Neural Latent Variable Models Xiao Liu1,2 and Heyan Huang1,2 and Yue Zhang3,4∗ 1School of Computer Science and Technology, Beijing Institute of Technology 2Zhejiang Lab, China {xiaoliu,hhy63}@bit.edu.cn 3School of Engineering, Westlake University 4Institute of Advanced Technology, Westlake Institute for Advanced Study [email protected] Abstract We consider open domain event extraction, the task of extracting unconstraint types of events from news clusters. A novel latent variable neural model is constructed, which is scalable to very large corpus. A dataset is collected and manually annotated, with task-specific evaluation metrics being designed. Results show that the proposed unsupervised model gives better performance compared to the state-of-the-art method for event schema induction. 1 Introduction Extracting events from news text has received much research attention. The task typically consists of two subtasks, namely schema induction, which is to extract event templates that specify argument slots for given event types (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015; Sha et al., 2016; Huang et al., 2016; Ahn, 2017; Yuan et al., 2018), and event extraction, which is to identify events with filled slots from a piece of news (Nguyen et al., 2016b; Sha et al., 2018; Liu et al., 2018a; Chen et al., 2018, 2015; Feng et al., 2016; Nguyen and Grishman, 2016; Liu et al., 2018b). Previous work focuses on extracting events from single news documents according to a set of pre-specified event types, such as arson, attack or earthquakes. While useful for tracking highly specific types of events from news, the above setting can be relatively less useful for decision making in security and financial markets, which can require comprehensive knowledge on broad-coverage, finegrained and dynamically-evolving event categories. In addition, given the fact that different news agencies can report the same events, redundancy can be leveraged for better event extraction. In this paper, we investigate open domain ∗Corresponding author. MUC 4 ODEE Document News Report News Report News Report News Cluster Trigger raise Agent UnitedHealth, UnitedHealth shares Patient 2018 forecast, better-than-expected profits, the insurance business Time the third quarter Variation 28% Unconstrained Types of Open Doamin Events with Their Own Schemas Trigger report Agent UnitedHealth Group, the largest U.S. health insurer Patient better-than-expected third-quarter earnings Time Tuesday Trigger predict Agent UnitedHealth Group Patient Medicare growth Type Perpetrator Instrument Target Victim Four Types of Events with Fixed Slots: Arson, Attack, Bombing and Kidnapping Figure 1: Comparison between MUC 4 and ODEE. event extraction (ODEE), which is to extract unconstraint types of events and induce universal event schemas from clusters of news reports. As shown in Figure 1, compared with traditional event extraction task exemplified by MUC 4 (Sundheim, 1992), the task of ODEE poses additional challenges to modeling, which have not been considered in traditional methods. 
First, more than one event can be extracted from a news cluster, where events can be flexible in having varying numbers of slots in the open domain, and slots can be flexible without identical distributions regardless of the event type, which has been assumed by previous work on schema induction. Second, mentions of the same entities from different reports in a news cluster should be taken into account for improved performance. We build an unsupervised generative model to address these challenges. While previous work on generative schema induction (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015) relies on hand-crafted indicator features, we introduce latent variables produced by neural networks for better representation power. A novel graph model 2861 is designed, with a latent event type vector for each news cluster from a global parameterized normal distribution, and textual redundancy features for entities. Our model takes advantage of contextualized pre-trained language model (ELMo, Peters et al. (2018)) and scalable neural variational inference (Srivastava and Sutton, 2017). To evaluate model performance, we collect and annotate a large-scale dataset from Google Business News1 with diverse event types and explainable event schemas. In addition to the standard metrics for schema matching, we adapt slot coherence based on NPMI (Lau et al., 2014) for quantitatively measuring the intrinsic qualities of slots and schemas, which are inherently clusters. Results show that our neural latent variable model outperforms state-of-the-art event schema induction methods. In addition, redundancy is highly useful for improving open domain event extraction. Visualizations of learned parameters show that our model can give reasonable latent event types. To our knowledge, we are the first to use neural latent variable model for inducing event schemas and extracting events. We release our code and dataset at https://github.com/ lx865712528/ACL2019-ODEE. 2 Related Work The most popular schema induction and event extraction task setting is MUC 4, in which four event types - Arson, Attack, Bombing and Kidnapping - and four slots - Perpetrator, Instrument, Target and Victim - are defined. We compare the task settings of MUC 4 and ODEE in Figure 1. For MUC 4, the inputs are single news documents, and the output belongs to four types of events with schemas consisting of fixed slots. For ODEE, in contrast, the inputs are news clusters rather than the individual news, and the output is unconstrained types of open domain events and unique schemas with various slot combinations. Event Schema Induction seminal work studies patterns (Shinyama and Sekine, 2006; Filatova et al., 2006; Qiu et al., 2008) and event chains (Chambers and Jurafsky, 2011) for template induction. For MUC 4, the current dominant methods include probabilistic generative methods (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015) that jointly model predicate and ar1https://news.google.com/?hl=en-US&gl= US&ceid=US:en, crawled from Oct. 2018 to Jan. 2019. gument assignment, and ad-hoc clustering algorithms for inducing slots (Sha et al., 2016; Huang et al., 2016; Ahn, 2017; Yuan et al., 2018). These methods all rely on hand-crafted discrete features without fully model the textual redundancy. There are also works on modeling event schemas and scripts using neural language models (Modi and Titov, 2014; Rudinger et al., 2015; Pichotta and Mooney, 2016), but they do not explore neural latent variables and redundancy. 
Event Extraction work typically assumes that event schemas are given, recognizing event triggers and their corresponding arguments. This can be regarded as a subtask of ODEE. Existing work exploits sentence-level (McClosky et al., 2011; Li et al., 2013; Liu et al., 2016; Yang and Mitchell, 2016) and document-level statistics (Liao and Grishman, 2010b; Ji and Grishman, 2008; Hong et al., 2011; Reichart and Barzilay, 2012). There has also been work using RNNs (Nguyen et al., 2016b; Sha et al., 2018; Liu et al., 2018a; Chen et al., 2018), CNNs (Chen et al., 2015; Feng et al., 2016; Nguyen and Grishman, 2016) and GCNs (Liu et al., 2018b) to represent sentences of events. Event extraction has been treated as a supervised or semi-supervised (Liao and Grishman, 2010a; Huang and Riloff, 2012) task. In contrast, ODEE is a fully unsupervised setting. Event Discovery in Tweet Streams extracts news-worthy clusters of words, segments and frames. Both supervised and unsupervised methods have been used. The former (Sakaki et al., 2010; Benson et al., 2011) are typically designed to monitor certain event types, while the latter cluster features according to their burstiness (Becker et al., 2011; Cui et al., 2012; Li et al., 2012; Ritter et al., 2012; Qin et al., 2013; Ifrim et al., 2014; McMinn and Jose, 2015; Qin et al., 2017). This line of work is similar to our work in using information redundancy, but different because we focus on formal news texts and induce structural event schemas. First Story Detection (FSD) systems aim to identify news articles that discuss events not reported before. Most work on FSD detects first stories by finding the nearest neighbors of new documents (Kumaran and Allan, 2005; Moran et al., 2016; Panagiotou et al., 2016; Vuurens and de Vries, 2016). This line of work exploits textual redundancy in massive streams predicting whether or not a document contains a new event as a clas2862 sification task. In contrast, we study the event schemas and extract detailed events. 3 Task and Data Task Definition. In ODEE, the input consists of news clusters, each containing reports about the same event. The output is a bag of open-domain events, each consisting of an event trigger and a list of event arguments in its own schema. In most cases, one event is semantically sufficient to represent the output. Formally, given an open-domain news corpus N containing a set of news clusters {c ∈N}, suppose that there are Mc news reports {di ∈ c|i = 1, · · · , Mc} in the news cluster c focusing on the same event Ec. The output is a pair (Ec, TE), where Ec is the aforementioned set of open-domain events and TE is a set of schemas that define the semantic slots for this set of events. Data Collection. We crawl news reports from Google Business News, which offers news clusters about the same events from different sources. In each news cluster, there are no more than five news reports. For each news report, we obtain the title, publish timestamp, download timestamp, source URL and full text. In total, we obtain 55,618 business news reports with 13,047 news clusters in 288 batches from Oct. 17, 2018, to Jan. 22, 2019. The crawler is executed about three times per day. The full text corpus is released as GNBusinessFull-Text. For this paper, we trim the news reports in each news cluster by keeping the title and first paragraph, releasing as GNBusiness-All. 
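For concreteness, the task input and output defined above can be pictured with data structures like the following sketch; the class and field names are hypothetical and only illustrate the news-cluster-in, open-domain-events-out setting, not the released data format.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NewsReport:
    title: str
    first_paragraph: str
    source_url: str
    publish_time: str

@dataclass
class NewsCluster:                       # one input instance c
    reports: List[NewsReport]            # up to five reports on the same event

@dataclass
class OpenDomainEvent:                   # one element of the output set E_c
    trigger: str                         # e.g. "raise"
    arguments: Dict[str, List[str]] = field(default_factory=dict)
    # e.g. {"Agent": ["UnitedHealth"], "Time": ["the third quarter"]}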
Inspired by the general slots in FrameNet (Baker et al., 1998), we design reference event schemas for open domain event types, which include eight possible slots: Agent, Patient, Time, Place, Aim, Old Value, New Value and Variation. Agent and Patient are the semantic agent and patient of the trigger, respectively; Aim is the target or reason for the event. If the event involves value changes, Old Value serves the old value, New Value serves the new value and Variation is the variation between New Value and Old Value. Note that the roles that we define are more thematic and less specific to detailed events as some of the existing event extraction datasets do (Sundheim, 1992; Nguyen et al., 2016a), because we want to make our dataset general and useful for a wide range of open domain conditions. We leave finer-grained role typing to future work. Split #C #R #S #W Test 574 2,433 5,830 96,745 Dev 106 414 991 16,839 Unlabelled 12,305 52,464 127,416 2,101,558 All 12,985 55,311 134,237 2,215,142 Full-Text 12,985 55,311 1,450,336 31,103,698 Table 1: Data split statistics. (C news clusters; R news reports; S sentences; W words.) Dataset #D #L #T #S MUC 4 1700 400 4 4 ACE 2005 599 599 33 36 ERE 562 562 38 27 ASTRE 1038 100 12 18 GNBusiness 12,985 680 – 8 Table 2: Comparison with existing datasets. (D documents or news clusters; L labeled documents or news clusters; T event types; S slots.) We randomly select 18 batches of news clusters, with 680 clusters in total, dividing them into a development set and a test set by a ratio of 1 : 5. The development set, test set and the rest unlabeled clusters are released as GNBusiness-Dev, GNBusiness-Test and GNBusiness-Unlabeled, respectively. One coauthor and an external annotator manually label the events in the news clusters as gold standards. For each news cluster, they assign each entity which participants in the event or its head word a beforehand slot. The interannotator agreement (IAA) for each slot realization in the development set has a Cohen’s kappa (Cohen, 1960) κ = 0.7. The statistics of each data split is shown in Table 1, and a comparison with existing event extraction and event schema induction datasets, including ASTRE (Nguyen et al., 2016a), MUC 4, ACE 20052 and ERE3, is shown in Table 2. Compared with the other datasets, GNBusiness has a much larger number of documents (i.e., news clusters in GNBusiness), and a comparable number of labeled documents. 4 Method We investigate three incrementally more complex neural latent variable models for ODEE. 4.1 Model 1 Our first model is shown in Figure 2(a). It can be regarded as a neural extension of Nguyen et al. 2https://catalog.ldc.upenn.edu/ LDC2006T06 3https://catalog.ldc.upenn.edu/ LDC2013E64 2863 s S h f β λ E (a) ODEE-F s h f t α θ β λ E C (b) ODEE-FE s h f t α θ β r γ λ E C (c) ODEE-FER Figure 2: Plate notations for models. (S – # of slots; E – # of entities; C – # of news clusters; V – head word vocabulary size; the grey circles are observed variables and the white circles are hidden variables.) Algorithm 1 ODEE-F 1: for each entity e ∈E do 2: Sample a slot s ∼Uniform(1, S) 3: Sample a head h ∼Multinomial(1, λs) 4: Sample a feature vector f ∼Normal(β) 5: end for (2015). Given a corpus N, we sample a slot s for each entity e from a uniform distribution of S slots, and then a head word h from a multinomial distribution, as well as a continuous feature vector f ∈Rn produced by a contextual encoder. 
For simplicity, we assume that f follows a multivariable normal distribution whose covariance matrix is a diagonal matrix. We mark all the parameters (mean vectors and diagonal vectors of covariance matrixes) for the S different normal distributions for f as β ∈RS×2n, where n represents the dimension of f, treating the probability matrix λ ∈RS×V in the slot-head distribution as parameters under the row-wise simplex constraint, where V is the head word vocabulary size. We call this model ODEE-F. Pre-trained contextualized embeddings such as ELMo (Peters et al., 2018), GPTs (Radford et al., 2018, 2019) and BERT (Devlin et al., 2018) give improvements on a range of natural language processing tasks by offering rich language model information. We choose ELMo4 as our contextual feature encoder, which manipulates unknown words by using character representations. The generative story is shown in Algorithm 1. The joint probability of an entity e is pλ,β(e) = p(s) × pλ(h|s) × pβ(f|s) (1) 4In practice, we use the “small” ELMo model with 2 × 128-d output in https://allennlp.org/elmo as initial parameters and fine-tune it on GNBusiness-Full-Text. Algorithm 2 ODEE-FE 1: for each news cluster c ∈N do 2: Sample a latent event type vector t ∼Normal(α) 3: for each entity e ∈Ec do 4: Sample a slot s ∼Multinomial(MLP(t; θ)) 5: Sample a head h ∼Multinomial(1, λs) 6: Sample a feature vector f ∼Normal(βs) 7: end for 8: end for 4.2 Model 2 A limitation of ODEE-F is that sampling slot assignment s from a global uniform distribution does not sufficiently model the fact that different events may have different slot distributions. Thus, in Figure 2(b), we further sample a latent event type vector t ∈Rn for each news cluster from a global normal distribution parameterized by α. We then use t and a multi-layer perceptron (MLP) with parameters θ to encode the corresponding slot distribution logits, sampling a discrete slot assignment s ∼Multinomial(MLP(t; θ)). The output of the MLP is passed through a softmax layer before being used. We name this model as ODEE-FE. The generative story is shown in Algorithm 2. The joint probability of a news cluster c is pα,β,θ,λ(c) = pα(t) × Y e∈Ec pθ(s|t) × pλ(h|s) × pβ(f|s) (2) 4.3 Model 3 Intuitively, the more frequently a coreferential entity shows up in a news cluster, the more likely it is with an important slot. Beyond that, different news agencies focus on different aspects of event arguments, which can offer complementary information through textual redundancy. One intu2864 Algorithm 3 ODEE-FER 1: for each news cluster c ∈N do 2: Sample a latent event type vector t ∼Normal(α) 3: for each entity e ∈Ec do 4: Sample a slot s ∼Multinomial(MLP(t; θ)) 5: Sample a head h ∼Multinomial(1, λs) 6: Sample a feature vector f ∼Normal(βs) 7: Sample a redundancy ratio r ∼Normal(γs) 8: end for 9: end for ition is that occurrence frequency is a straightforward measure for word-level redundancy. Thus, in Figure 2(c), we additionally bring in the normalized occurrence frequency of a coreferential slot realization as an observed latent variable r ∼ Normal(γs). We call this model ODEE-FER. Formally, a news cluster c receives a latent event type vector t where each entity e ∈Ec receives a slot type s. The generative story is shown in Algorithm 3. 
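A minimal sketch of the ODEE-FER generative story (Algorithm 3) as ancestral sampling is given below; the MLP callable, parameter layout and NumPy usage are illustrative assumptions rather than the actual training code.

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_cluster(alpha_mu, alpha_sigma, mlp, lam, beta, gamma, n_entities, rng):
    """Ancestral sampling of one news cluster under ODEE-FER (Algorithm 3).

    alpha_mu, alpha_sigma : parameters of the global normal over event types
    mlp                   : callable mapping an event-type vector t to slot logits (S,)
    lam                   : (S, V) slot-to-head-word probability matrix
    beta, gamma           : per-slot (mean, std) pairs for features / redundancy ratios
    """
    t = rng.normal(alpha_mu, alpha_sigma)              # latent event type vector
    entities = []
    for _ in range(n_entities):
        slot_probs = softmax(mlp(t))
        s = rng.choice(len(slot_probs), p=slot_probs)  # slot assignment
        h = rng.choice(lam.shape[1], p=lam[s])         # head word index
        f = rng.normal(beta[s][0], beta[s][1])         # contextual feature vector
        r = rng.normal(gamma[s][0], gamma[s][1])       # redundancy ratio
        entities.append((s, h, f, r))
    return t, entities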
The joint distribution of a news cluster with head words, redundant contextual features and latent event type is

p_{α,β,γ,θ,λ}(c) = p_α(t) × ∏_{e∈E_c} p_θ(s|t) × p_λ(h|s) × p_β(f|s) × p_γ(r|s)   (3)

4.4 Inference

We now consider two tasks for ODEE-FER: (1) learning the parameters and (2) performing inference to obtain the posterior distribution of the latent variables s and t, given a news cluster c. We adapt the amortized variational inference method of Srivastava and Sutton (2017), using a neural inference network to learn the variational parameters. For simplicity, we concatenate f with r as a new observed feature vector f′ in ODEE-FER and merge their parameters as β′ ∈ R^{S×(2n+2)}. Following Srivastava and Sutton (2017), we collapse the discrete latent variable s to obtain an Evidence Lower BOund (ELBO) (Kingma and Welling, 2014) of the log marginal likelihood:

log p_{α,β′,θ,λ}(c) = log ∫_t [ ∏_{e∈E_c} p_{λ,θ}(h|t) p_{β′,θ}(f′|t) ] p_α(t) dt
                    ≥ ELBO_c(α, β′, θ, λ, ω) = E_{q_ω(t)}[ log p_{β′,θ,λ}(c|t) ] − D_KL[ q_ω(t) ∥ p_α(t) ]   (4)

where D_KL[q_ω ∥ p_α] is the KL divergence between the variational posterior q_ω and the prior p_α. Due to the difficulty in computing the KL divergence between different categories of distributions and the existence of simple and effective reparameterization tricks for normal distributions, we choose q_ω(t) to be a normal distribution parameterized by ω, which is learned by a neural inference network.

Figure 3: The framework of our inference network (inputs: head word histograms h and contextual features f′, passed through fully connected, pooling, dropout and batch-normalization layers with softplus activations; outputs: the mean vector µ and variance vector σ² of q_ω(t)).

As shown in Figure 3, our inference network takes the head word histograms h (the number of times each head word appears in a news cluster) and the contextual features f′ as inputs, and computes the mean vector µ and the variance vector σ² of q_ω(t). Equation 4 can be solved by obtaining a Monte Carlo sample and applying the reparameterization trick for the first term, and using the closed form for the KL divergence term. We then use the ADAM optimizer (Kingma and Ba, 2014) to maximize the ELBO. In addition, to alleviate the component collapsing problem (Dinh and Dumoulin, 2016), we follow Srivastava and Sutton (2017) and use a high moment weight (> 0.8) and learning rate (in [0.001, 0.1]) in the ADAM optimizer, performing batch normalization (Ioffe and Szegedy, 2015) and dropout (Srivastava et al., 2014).

After learning the model, we make the slot assignment for each entity mention by MLE, choosing the slot s that maximizes the likelihood

p_{β′,θ,λ}(s|e, t) ∝ p_{β′,θ,λ}(s, h, f′, t) = p_θ(s|t) × p_λ(h|s) × p_{β′}(f′|s)   (5)

Table 3: Hyper-parameter settings.
Name                         Value
Slot number S                30
Feature dimension n          256
Fully connected layer size   100
MLP layer number             1
Activation function          softplus
Learning rate                0.002
Momentum                     0.99
Dropout rate                 0.2
Batch size                   200

4.5 Assembling Events for Output

To assemble the events in a news cluster c for final output, we need to find the predicate for each entity, which now has a slot value. We use POS tags and parse trees produced by the Stanford dependency parser (Klein and Manning, 2003) to extract the predicate for the head word of each entity mention. The following rules are applied: the governor of a head word is regarded as a predicate if (1) it is a verb (VB), or (2) it is a noun (NN) that belongs to the noun.ACT or noun.EVENT category of WordNet.
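A small sketch of this rule check is shown below, assuming the governor token and its POS tag are available from the dependency parse; it approximates the WordNet category test with NLTK lexicographer file names and is not the authors' implementation.

from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def is_predicate(governor_word, governor_pos):
    """Decide whether the dependency governor of an entity head word can serve
    as the event predicate, following rules (1) and (2) above."""
    if governor_pos.startswith("VB"):                # rule (1): verbal governor
        return True
    if governor_pos.startswith("NN"):                # rule (2): eventive noun
        lexnames = {syn.lexname() for syn in wn.synsets(governor_word, pos=wn.NOUN)}
        return bool(lexnames & {"noun.act", "noun.event"})
    return False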
We merge the predicates of entity mentions in the same coreference chain into a predicate set. For each predicate v in these sets, we find the entities whose predicate set contains v, treating those entities as arguments of the event triggered by v. Finally, by ranking events by their numbers of arguments, we obtain the top-N open-domain events as the output E_c.

5 Experiments

We verify the effectiveness of neural latent variable modeling and redundancy information for ODEE, and conduct case analysis. All our experiments are conducted on the GNBusiness dataset. Note that we do not compare our models with existing work on MUC 4 or ACE 2005 because these datasets do not consist of news clusters.

Settings. The hyper-parameters of our models and inference network are shown in Table 3. Most of the hyper-parameters directly follow Srivastava and Sutton (2017), while the slot number S is chosen according to development experiments.

5.1 Evaluation Metrics

Schema Matching. We follow previous work and use precision, recall and F1-score as the metrics for schema matching (Chambers and Jurafsky, 2011; Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015; Sha et al., 2016; Ahn, 2017). The matching between model answers and references is based on the head word. Following previous work, we regard as the head word the right-most word of an entity phrase, or the right-most word before the first "of", "that", "which" or "by", if any. In addition, we perform slot mapping between the slots that our model learns and the slots in the annotation. Following previous work on MUC 4 (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015; Sha et al., 2016; Ahn, 2017), we implement automatic greedy slot mapping: each reference slot is mapped to the learned slot that ranks best according to the F1-score on GNBusiness-Dev.

Slot Coherence. Several metrics for qualitative topic coherence evaluation have been proposed. Lau et al. (2014) showed that normalized pointwise mutual information (NPMI) between all the pairs of words in a set of topics most closely matches human judgment among all the competing metrics. We thus adopt it as slot coherence (we use the implementation in https://github.com/jhlau/topic_interpretability). Formally, the slot coherence C_NPMI(s) of a slot s is calculated over its top-N head words as

C_NPMI(s) = 2 / (N² − N) · Σ_{i=2}^{N} Σ_{j=1}^{i−1} NPMI(w_i, w_j)   (6)

NPMI(w_i, w_j) = log( (p(w_i, w_j) + ϵ) / (p(w_i) · p(w_j)) ) / ( −log(p(w_i, w_j) + ϵ) )   (7)

where p(w_j) and p(w_i, w_j) are estimated from word co-occurrence counts derived within a sliding window over external reference documents, and ϵ is added to avoid a zero logarithm. Previous work on topic coherence uses Wikipedia and Gigaword as the reference corpus to calculate word frequencies (Newman et al., 2010; Lau et al., 2014). We use GNBusiness-Full-Text, which contains 1.45M sentences and 31M words, sufficient for estimating the probabilities. To reduce sparsity, for each news report we count word co-occurrences over the whole document instead of within a sliding window. In addition, for each slot, we keep the top-5, top-10, top-20 and top-100 head words, averaging the 4 × S coherence results over a test set.

Figure 4: F1 scores of schema matching and averaged slot coherences C_NPMI of the five models with different numbers of slots S (x-axis: number of slots S, from 10 to 50; y-axis: performance in %).
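For reference, the slot coherence of Eqs. 6 and 7 can be approximated with a sketch like the following, which assumes the reference corpus has been reduced to one set of words per document; the extra smoothing term in the PMI denominator is a simplification not present in Eq. 7, and the function name is an assumption rather than the cited implementation.

import math
from itertools import combinations

def slot_coherence(top_words, doc_word_sets, eps=1e-12):
    """NPMI-based coherence (Eqs. 6-7) of one slot, given its top-N head words
    and a reference corpus represented as one set of words per document."""
    n_docs = len(doc_word_sets)
    def p(*words):                                   # document-level probability
        return sum(all(w in d for w in words) for d in doc_word_sets) / n_docs
    scores = []
    for w_i, w_j in combinations(top_words, 2):      # all N*(N-1)/2 pairs
        p_ij = p(w_i, w_j)
        # eps in the denominator is an extra guard against unseen single words
        pmi = math.log((p_ij + eps) / (p(w_i) * p(w_j) + eps))
        scores.append(pmi / -math.log(p_ij + eps))
    return sum(scores) / len(scores) if scores else 0.0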
Table 4: Overall performance of schema matching (%).
Method                 P      R      F1
Nguyen et al. (2015)   41.5   53.4   46.7
Clustering             41.2   50.6   45.4
ODEE-F                 41.7   53.2   46.8
ODEE-FE                42.4   56.1   48.3
ODEE-FER               43.4   58.3   49.8

5.2 Development Experiments

We learn the models on GNBusiness-All and use GNBusiness-Dev to determine the slot number S by grid search in [10, 50] with a step of 5. Figure 4 shows the F1 scores of schema matching and the averaged slot coherences of the five models we introduce in the next subsection, with the number of slots S ranging from 10 to 50. We can see that the best F1 score of ODEE-FER is obtained with 30 slots, while the best slot coherence is obtained with 25 slots. A value of S larger than 30 or smaller than 25 gives lower results on both F1 score and slot coherence. Considering the balance between F1 score and slot coherence, we chose S = 30 as our final value for the remaining experiments.

5.3 Final Results

Table 4 and Table 5 show the final results. The p values based on the appropriate t-test are provided below in cases where the compared values are close.

Table 5: Averaged slot coherence results.
Method                 Ave. Slot Coherence
Nguyen et al. (2015)   0.10
ODEE-F                 0.10
ODEE-FE                0.16
ODEE-FER               0.18

We compare our work with Nguyen et al. (2015), the state-of-the-art model on MUC 4, which represents each entity as a triple containing a head word, a list of attribute relation features and a list of predicate relation features. Features in that model are discrete and extracted from dependency parse trees. The model structure is identical to our ODEE-F except for the features. To test the strength of our external features in isolation, we build another baseline model by taking the continuous features of each entity in ODEE-F and running spectral clustering (von Luxburg, 2007). We call it Clustering.

Schema Matching. Table 4 shows the overall performance of schema matching on GNBusiness-Test. From the table, we can see that ODEE-FER achieves the best F1 score among all the methods. By comparing Nguyen et al. (2015) and ODEE-F (p = 0.01), we can see that using continuous contextual features gives better performance than discrete features. This demonstrates the advantage of continuous contextual features for alleviating the sparsity of discrete features in texts. We can also see from the result of Clustering that using only the contextual features is not sufficient for ODEE, while combining them with our neural latent variable model in ODEE-F achieves strong results (p = 6 × 10^−6). This shows that the neural latent variable model can better explain the observed data. These results demonstrate the effectiveness of our method in incorporating contextual features, latent event types and redundancy information. Among the ODEE models, ODEE-FE gives a 2% gain in F1 score over ODEE-F, which shows that modeling the latent event type is beneficial and that the slot distribution relies on the latent event type. Additionally, there is a 1% gain in F1 score when comparing ODEE-FER with ODEE-FE (p = 2 × 10^−6), which confirms that leveraging redundancy is also beneficial in determining which slot an entity should be assigned.

Slot Coherence. Table 5 shows the comparison of averaged slot coherence results over all the slots in the schemas. Note that we do not report the slot coherence for the Clustering model because it does not output the top-N head words in each slot.
The averaged slot coherence of ODEE-FER is the highest, which is consistent with the conclusion from Table 4. The averaged slot coherence 2867 Boston Dynamics' reveals its robodog Spot dancing Arby's will debut sous vide duck sandwich Prime Deli Corporation Recalls Salads Massive recall issued for frozen beef, chicken taquitos Wendy's Offering $1 Any Size Fry For A Limited Time Netflix shares surge IBM drops 4.3% aftermarket Walmart lowered its profit targets UnitedHealth shares rise Intel shares gain Figure 5: T-SNE visualization results of the latent event type vectors in the test set with colored labels produced by spectral clustering. of ODEE-F is comparable to that of Nguyen et al. (2015) (p = 0.3415), which again demonstrates that the contextual features are a strong alternative to discrete features. The scores of ODEE-FE (p = 0.06) and ODEE-FER (p = 10−5) are both higher than that of ODEE-F, which proves that the latent event type is critical in ODEE. 5.4 Latent Event Type Analysis We are interested in learning how well the latent event type vectors can be modeled. To this end, for each news cluster in GNBusiness-Dev, we use our inference network in Figure 3 to calculate the mean µ for the latent event type vector t. T-SNE transformation (Maaten and Hinton, 2008) of the mean vectors are shown in Figure 5. Spectral clustering is further applied, and the number of clusters is chosen by the Calinski-Harabasz Score (Cali´nski and Harabasz, 1974) in grid search. In Figure 5, there are four main clusters marked in different colors. Representative titles of news reports are shown as examples. We find that the vectors show salient themes for each main cluster. For example, the red cluster contains news reports about rise and drop of stocks such as Netflix shares surge, IBM drops, Intel shares gain, etc; the news reports in the purple cluster are mostly about product related activities, such as Boston Dynamics’ reveals its robodog Spot dancing, Arby’s will debut sous vide duck sandwich, Wendy’s Offering $1 Any Size Fry, etc. The green cluster and the DOC 1 2018-10-16 07:00:03 UnitedHealth shares rise after posting a 28% rise in third-quarter profit, raises 2018 forecast UnitedHealth, the largest U.S. health insurer, reported betterthan-expected third-quarter earnings and revenue on Tuesday. DOC 2 2018-10-16 00:00:00 UnitedHealth's 2018 so far: Three quarters, three boosts to outlook DOC 3 2018-10-17 00:32:09 UnitedHealth Group predicts Medicare growth The comments came as the insurer beat profit expectations for Q3. DOC 4 2018-10-16 10:53:06 UnitedHealth beats all around in 3Q, raises outlook again MINNEAPOLIS (AP) — UnitedHealth reported betterthan-expected profits and revenue for the third quarter and the company raised its outlook yet again on strong trends in the insurance business. Trigger raise Agent UnitedHealth, UnitedHealth shares Patient 2018 forecast, better-than-expected profits, the insurance business Time the third quarter Variation 28% Event 1 Trigger report Agent UnitedHealth Group, the largest U.S. health insurer Patient better-than-expected third-quarter earnings Time Tuesday Trigger predict Agent UnitedHealth Group Patient Medicare growth Event 2 Event 3 Figure 6: Extracted open domain events for UnitedHealth shares rise. orange cluster are also interpretable. The former is about organization reporting changes, while the latter is about service related activities. 5.5 Case Study We further use the news cluster UnitedHealth shares rise in Figure 5 for case study. 
Figure 6 shows the top-3 open-domain events extracted from the news cluster, where four input news reports are shown on the left and three systemgenerated events are shown on the right with mapped slots. By comparing the plain news reports and the extracted events, we can see that the output events give a reasonable summary for the news cluster with three events triggered by “raise”, “report” and “predict”, respectively. Most of the slots are meaningful and closely related to the trigger, while covering most key aspects. However, this example also contains several incorrect slots. In the event 1, the slot “Variation” and its realization “28%” are only related to the entity “better-than-expected profits”, but there are three slot realizations in the event, which causes confusion. In addition, the slot “Aim” does not appear in the first event, whose realization should be “third-quarter profit” in document 1. The reason may be that we assemble an event only using entities with the same predicate, which introduces noise. Besides, due to 2868 the preprocessing errors in resolving coreference chains, some entity mentions are missing from the output. There are also cases where one slot realization is semantically related to one trigger but eventually appears in a different event. One example is the entity “better-than-expected profits”, which is related to the predicate word “report” but finally appears in the “raise” event. The cause can be errors propagated from parsing dependency trees, which confuse the syntactic predicate of the head word of an entity. 6 Conclusion We presented the task of open domain event extraction, extracting unconstraint types of events from news clusters. A novel latent variable neural model was investigated, which explores latent event type vectors and entity mention redundancy. In addition, GNBusiness dataset, a largescale dataset annotated with diverse event types and explainable event schemas, is released along with this paper. To our knowledge, we are the first to use neural latent variable model for inducing event schemas and extracting events. Acknowledgments We thank the anonymous reviewers for their valuable comments and suggestions. We thank KiemHieu Nguyen from Hanoi University of Science and Technology for providing source code and solving confusions for their work. We thank Katherine Keith from University of Massachusetts at Amherst for sharing valuable experiences on probabilistic models. This work is supported by National Natural Science Foundation of China No. 61751201, National Key Research and Development Plan No. 2016QY03D0602, Research Foundation of Beijing Municipal Science and Technology Commission No. Z181100008918002, the funding from Rxhui Inc6 and China Scholarship Council No. 201806030142. The work is done when Xiao Liu is visiting Yue Zhang. References Natalie Ahn. 2017. Inducing event types and roles in reverse: Using function to discover theme. In Proceedings of the Events and Stories in the News Workshop@ACL 2017, pages 66–76. 6https://rxhui.com Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 86–90. Hila Becker, Mor Naaman, and Luis Gravano. 2011. Beyond trending topics: Real-world event identification on twitter. In Proceedings of the 5th International Conference on Weblogs and Social Media, pages 438–441. 
Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 389– 398. Tadeusz Cali´nski and Jerzy Harabasz. 1974. A dendrite method for cluster analysis. Communications in Statistics-theory and Methods, 3(1):1–27. Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1797– 1807. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 976–986. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 167–176. Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multilevel attention mechanisms. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1267–1276. Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 837–846. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37–46. Anqi Cui, Min Zhang, Yiqun Liu, Shaoping Ma, and Kuo Zhang. 2012. Discover breaking events with popular hashtags in twitter. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 1794–1798. 2869 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Laurent Dinh and Vincent Dumoulin. 2016. Training neural bayesian nets. Technical report. Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 66–71. Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen R. McKeown. 2006. Automatic creation of domain templates. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics and the 21st International Conference on Computational Linguistics, pages 207–214. Yu Hong, Jianfeng Zhang, Bin Ma, Jian-Min Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In roceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1127–1136. Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 258– 268. Ruihong Huang and Ellen Riloff. 2012. Bootstrapped training of event extraction classifiers. 
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 286–295. Georgiana Ifrim, Bichen Shi, and Igor Brigadir. 2014. Event detection in twitter using aggressive filtering and hierarchical tweet clustering. In Proceedings of the SNOW 2014 Data Challenge co-located with 23rd International World Wide Web Conference, pages 33–40. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pages 448–456. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 254–262. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the 2014 International Conference on Learning Representations. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430. Giridhar Kumaran and James Allan. 2005. Using names and topics for new event detection. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing, pages 121– 128. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. Chenliang Li, Aixin Sun, and Anwitaman Datta. 2012. Twevent: segment-based event detection from tweets. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 155–164. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 73–82. Shasha Liao and Ralph Grishman. 2010a. Filtered ranking for bootstrapping in event extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 680–688. Shasha Liao and Ralph Grishman. 2010b. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797. Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018a. Event detection via gated multilingual attention mechanism. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 4865–4872. Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2134–2143. Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018b. Jointly multiple events extraction via attentionbased graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247–1256. Ulrike von Luxburg. 2007. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. 2870 David McClosky, Mihai Surdeanu, and Christopher D. Manning. 2011. 
Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1626–1635. Andrew James McMinn and Joemon M. Jose. 2015. Real-time entity-based event detection for twitter. In Proceedings of the Experimental IR Meets Multilinguality, Multimodality, and Interaction - 6th International Conference of the CLEF Association, pages 65–77. Ashutosh Modi and Ivan Titov. 2014. Inducing neural models of script knowledge. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 49–57. Sean Moran, Richard McCreadie, Craig Macdonald, and Iadh Ounis. 2016. Enhancing first story detection using word embeddings. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 821–824. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 100–108. Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besanc¸on. 2015. Generative event schema induction with entity disambiguation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 188–197. Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besanc¸on. 2016a. A dataset for open event extraction in english. In Proceedings of the 10th International Conference on Language Resources and Evaluation, pages 1939–1943. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016b. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 886–891. Nikolaos Panagiotou, Cem Akkaya, Kostas Tsioutsiouliklis, Vana Kalogeraki, and Dimitrios Gunopulos. 2016. First story detection using entities and relations. In Proceedings of the 26th International Conference on Computational Linguistics, pages 3237–3244. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237. Karl Pichotta and Raymond J. Mooney. 2016. Using sentence-level LSTM language models for script inference. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Yanxia Qin, Yue Zhang, Min Zhang, and Dequan Zheng. 2013. Feature-rich segment-based news event detection on twitter. In Proceedings of the 6th International Joint Conference on Natural Language Processing, pages 302–310. Yanxia Qin, Yue Zhang, Min Zhang, and Dequan Zheng. 2017. Semantic-frame representation for event detection on twitter. In Proceedings of the 2017 International Conference on Asian Language Processing, pages 264–267. Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2008. Modeling context in scenario template creation. 
In Proceedings of the 3rd International Joint Conference on Natural Language Processing, pages 157– 164. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Roi Reichart and Regina Barzilay. 2012. Multi-event extraction guided by global constraints. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 70– 79. Alan Ritter, Mausam, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from twitter. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1104–1112. Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1681–1686. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of the 19th International Conference on World Wide Web, pages 851–860. 2871 Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Joint learning templates and slots for event schema induction. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 428–434. Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. Jointly extracting event triggers and arguments by dependency-bridge RNN and tensor-based argument interaction. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 5916–5923. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 304–311. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of the 2017 International Conference on Learning Representations. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Beth Sundheim. 1992. Overview of the fourth message understanding evaluation and conference. In Proceedings of the 4th Conference on Message Understanding, pages 3–21. Jeroen B. P. Vuurens and Arjen P. de Vries. 2016. First story detection using multiple nearest neighbors. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 845–848. Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299. Quan Yuan, Xiang Ren, Wenqi He, Chao Zhang, Xinhe Geng, Lifu Huang, Heng Ji, Chin-Yew Lin, and Jiawei Han. 2018. Open-schema event profiling for massive news corpora. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 587–596.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2872–2881 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2872 Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification Zhi-Xiu Ye, Zhen-Hua Ling∗ National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China [email protected], [email protected] Abstract This paper presents a multi-level matching and aggregation network (MLMAN) for few-shot relation classification. Previous studies on this topic adopt prototypical networks, which calculate the embedding vector of a query instance and the prototype vector of each support set independently. In contrast, our proposed MLMAN model encodes the query instance and each support set in an interactive way by considering their matching information at both local and instance levels. The final class prototype for each support set is obtained by attentive aggregation over the representations of its support instances, where the weights are calculated using the query instance. Experimental results demonstrate the effectiveness of our proposed methods, which achieve a new state-of-the-art performance on the FewRel dataset1. 1 Introduction Relation classification (RC) is a fundamental task in natural language processing (NLP), which aims to identify the semantic relation between two entities in text. For example, the instance “[London]e1 is the capital of [the UK]e2” expresses the relation capital of between the two entities London and the UK. Some conventional relation classification methods (Bethard and Martin, 2007; Zelenko et al., 2002) adopted supervised training and suffered from the lack of large-scale manually labeled data. To address this issue, the distant supervision method (Mintz et al., 2009) was proposed which annotated training data by heuristically aligning knowledge bases (KBs) and texts. However, the long-tail problem in KBs (Xiong et al., 2018; Han ∗Corresponding author: Zhen-Hua Ling. 1The code is available at https://github.com/ ZhixiuYe/MLMAN. Support Set class A: mother instance #1 The Queen Consort [Jetsun Pema]e2 gave birth to a son on 5 February 2016 , [Jigme Namgyel Wangchuck]e1. instance #2 He married the American actress [Cindy Robbins]e2 and was stepfather to her daughter , [Kimberly Beck]e1. instance #3 Edgar married actress [Moyna Macgill]e2 and became the father of [Angela Lansbury]e1. instance #4 In 1845 , [Cemile Sultan]e1 ’s mother , Empress [Dzdidil Kadn]e2, died. instance #5 Bo ’s wife [Gu Kailai]e2 traveled with their son [Bo Guagua]e1 to Britain. class B: member of ... class C: father ... class D: sport ... class E: voice type ... Query Instance He was married to [Eva Funck]e2 and they have a son [Gustav]e1 . Table 1: A data example of 5-way-5-shot relation classification in FewRel development set. The correct relation class for the query instance is class A: mother. The instances for other relation classes are omitted for saving space. et al., 2018) still exists and makes it hard to classify the relations with very few training samples. This paper focuses on the few-shot relation classification task, which was designed to address the long-tail problem. In this task, only few (e.g., 1 or 5) support instances are given for each relation, as shown by an example in Table 1. The few-shot learning problem has been studied extensively in computer vision (CV) field. 
Some methods adopt meta-learning architectures (Santoro et al., 2016; Ravi and Larochelle, 2016; Finn et al., 2017; Munkhdalai and Yu, 2017), which learn fast-learning abilities from previous experiences (e.g., training set) and then rapidly gen2873 eralize to new concepts (e.g., test set). Some other methods use metric learning based networks (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017), which learn the distance distributions among classes. A simple and effective metricbased few-shot learning method is prototypical network (Snell et al., 2017). In a prototype network, query and support instances are encoded into an embedding space independently. Then, a prototype vector for each class candidate is derived as the mean of its support instances in the embedding space. Finally, classification is performed by calculating the distances between the embedding vector of the query and all class prototypes. This prototype network method has also been applied to few-shot relation classification recently (Han et al., 2018). This paper proposes a multi-level matching and aggregation network (MLMAN) for few-shot relation classification. Different from prototypical networks, which represent support sets without dependency on query instances, our proposed MLMAN model encodes each query instance and each support set in an interactive way by considering their matching information at both local and instance levels. At local level, the local context representations of a query instance and a support set are softly matched toward each other following the sentence matching framework (Chen et al., 2017). Then, the matched local representations are aggregated into an embedding vector for each query and each support instance using max and average pooling. At instance level, the matching degree between the query instance and each of the support instances is calculated via a multi-layer perceptron (MLP). Taking the matching degrees as weights, the instances in a support set are aggregated to form the class prototype for final classification. All these matching and aggregation layers in the MLMAN model are estimated jointly using training data. Since the representations of the support instances in each class are expected to be close with each other, an auxiliary loss function is further designed to measure the inconsistency among all support representations in each class. In summary, our contributions in this paper are three-fold. First, a multi-level matching and aggregation network is proposed to encode query instances and class prototypes in an interactive fashion. Second, an auxiliary loss function measuring the consistency among support instances is designed. Third, our method achieves a new state-ofthe-art performance on FewRel, a public few-shot relation classification dataset. 2 Related Work 2.1 Relation Classification Relation classification is to identify the semantic relation between two entities in one sentence. In recently years, neural networks have been widely applied to deal with this task. Zeng et al. (2014) employed position features and convolutional neural networks (CNNs) to capture the structure and contextual information respectively. Then, a max pooling operation was adopted to determine the most useful features. Wang et al. (2016) proposed multi-level attention CNNs, which captured both entity-specific attention and relation-specific pooling attention in order to better discern patterns in heterogeneous contexts. Zhou et al. 
(2016) proposed attention-based bidirectional long shortterm memory networks (AttBLSTMs) to capture the most important semantic information in a sentence. All of these methods require a large amount of training data and can’t quickly adapt to a new class that has never been seen. 2.2 Metric Based Few-Shot Learning In few-shot learning paradigm, a classifier is required to generalize to new classes with only a small number of training samples. The metric based approach aims to learn a set of projection functions that take support and query samples from the target problem and classify them in a feed forward manner. This approach has lower complexity and is easier for implementation than meta-learner based approach (Ravi and Larochelle, 2016; Finn et al., 2017; Santoro et al., 2016; Munkhdalai and Yu, 2017). Some metric based few-shot learning methods have been developed for computer vision (CV) tasks, and all these methods encoded each support or query image to a vector independently for classification. Koch et al. (2015) proposed a method for learning siamese neural networks, which employed an unique structure to encode both support and query samples respectively and one more layer computing the induced distance metric between the pair. Vinyals et al. (2016) proposed to learn a matching network augmented with attention and external memories. And also, an episodebased training procedure was proposed, which was 2874 based on a principle that test and training conditions must match and has been adopted by many following studies. Snell et al. (2017) proposed prototypical networks that learn a metric space in which classification can be performed by computing distances to prototype representations of all classes, and the prototype representation of each class was the mean of all its support samples. Garcia and Bruna (2017) defined a graph neural network architecture to assimilate generic messagepassing inference algorithms, which generalized above three models. Regarding with few-shot relation classification, Han et al. (2018) adopted prototypical networks to build baseline models on the FewRel dataset. Gao et al. (2019) proposed hybrid attention-based prototypical networks to handle noisy training samples in few-shot learning. In this paper, we improve the conventional prototypical networks for few-shot relation classification by encoding the query instance and class prototype interactively through multi-level matching and aggregation. 2.3 Sentence Matching Sentence matching is essential for many NLP tasks, such as natural language inference (NLI) (Bowman et al., 2015) and response selection (Lowe et al., 2015). Some sentence matching methods mainly rely on sentence encoding (Mueller and Thyagarajan, 2016; Conneau et al., 2017; Chen et al., 2018), which encode a pair sentences independently and then transmit their embeddings into a classifier, such as a neural network, to decide the relationship between them. Some other methods are based on joint models (Chen et al., 2017; Gong et al., 2017; Kim et al., 2018), which use cross-features to represent the local (i.e., word-level and phrase-level) alignments for better performance. In this paper, we follow the joint models to achieve the local matching between a query instance and the support set for a class. 
The difference between our task and the other sentence matching tasks mentioned above is that, our goal is to match a sentence to a set of sentences, instead of to another sentence (Bowman et al., 2015) or to a sequence of sentences (Lowe et al., 2015). 3 Task Definition In few-shot relation classification, we are given two datasets, Dmeta−train and Dmeta−test. Each dataset consists of a set of samples (x, p, r), where x is a sentence composed of T words and the tth word is wt, p = (p1, p2) indicate the positions of two entities, and r is the relation label of the instance (x, p). These two datasets have their own relation label spaces that are disjoint with each other. Under few-shot configuration, Dmeta−test is splited into two parts, Dtest−support and Dtest−query. If Dtest−support contains K labeled samples for each of N relation classes, this target few-shot problem is named N-way-K-shot. Dtest−query contains test samples, each labeled with one of the N classes. Assuming that we only have Dtest−support and Dtest−query, we can train a model using Dtest−support and evaluate its performance on Dtest−query. But limited by the number of support samples (i.e,., N ×K), it is hard to train a good model from scratch. Although Dmeta−train and Dmeta−test have disjoint relation label spaces, Dmeta−train can also been utilized to help the few-shot relation classification on Dmeta−test. One approach is the paradigm proposed by Vinyals et al. (2016), which obey an important machine learning principle that test and train conditions must match. That’s to say, we also split Dmeta−train into two parts, Dtrain−support and Dtrain−query, and mimic the few-shot learning settings at training stage. In each training iteration, N classes are randomly selected from Dtrain−support, and K support instances are randomly selected from each class. In this way, we construct the train-support set S = {si k; i = 1, ..., N, k = 1, ..., K}, where si k is the k-th instance in class i. And also, we randomly select R samples from the remaining samples of those N classes and construct the trainquery set Q = {(qj, lj); j = 1, ..., R}, where lj ∈{1, ..., N} is the label of instance qj. Just like conventional prototypical networks, we expect to minimize the following objective function at training time Jmatch = −1 R X (q,l)∈Q P(l|S, q), (1) and P(l|S, q) is defined as P(l|S, q) = exp(f({sl k}K k=1, q)) PN i=1 exp(f({si k}K k=1, q)) . (2) The function f({si k}K k=1, q) is to calculate the matching degree between the query instance q and 2875 𝑠1 Encoder 𝑠𝐾 Encoder ... ... ... 𝐶 𝑞 Encoder 𝑄 Local Matching and Aggregation 𝑠 1 ... 𝑠 𝐾 ... Class Matching 𝛾 𝑠2 Encoder 𝑆1 𝑆2 𝑆𝐾 𝑞 Instance Matching and Aggregation 𝑠 Figure 1: The framework of our proposed MLMAN model. the set of support instances {si k}K k=1. How to design this function is the focus of this paper. 4 Methodology In this section, we will introduce our proposed multi-level matching and aggregation network (MLMAN) for modeling f({si k}K k=1, q). For simplicity, we will discard the superscript i of si k from Section 4.1 to Section 4.4. The framework of our proposed MLMAN model is shown in Fig. 1, which has four main modules. • Context Encoder. Given a sentence and the positions of two entities within this sentence, CNNs (Zeng et al., 2014) are adopted to derive the local context representations of each word in the sentence. • Local Matching and Aggregation. 
Similar to (Chen et al., 2017), given the local representation of a query instance and the local representations of K support instances, the attention method is employed to collect local matching information between them. Then, the matched local representations are aggregated to represent each instance as an embedding vector. • Instance Matching and Aggregation. The matching information between a query instance and each of the K support instances are calculated using an MLP. Then, we take the matching degrees as weights to sum the representations of support instances in order to get the class prototype. • Class Matching. An MLP is built to calculate the matching score between the representations of the query instance and the class prototype. More details of these four modules will be introduced in the following subsections. 4.1 Context Encoder For a query or support instance, each word wt in the sentence x is first mapped into a dw-dimensional word embedding et (Pennington et al., 2014). In order to describe the position information of the two entities in this instance, the position features (PFs) proposed by Zeng et al. (2014) are also adopted in our work. Here, PFs describe the relative distances between current word and the two entities, and are further mapped into two vectors p1t and p2t of dp dimensions. Finally, these three vectors are concatenated to get the word representation wt = [et; p1t; p2t] of dw+2dp dimensions, and the instance can be written as W ∈RT×(dw+2dp). The most popular models for local context encoding are recurrent neural networks (RNNs) with long short-term memories (LSTMs) (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNNs) (Kim, 2014). In this paper, we employ CNNs to build the context encoder. For an input instance W ∈RT×(dw+2dp), we input it into a CNN with dc filters. The output from the CNN is a matrix with T × dc dimensions. In this way, the context representations of the query instance Q ∈RTq×dc and the context representations of support instances {Sk ∈RTk×dc; k = 1, ..., K} are obtained, where Tq and Tk are the sentence lengths of the query sentence and the k-th support sentence respectively. 4.2 Local Matching and Aggregation In order to get the matching information between Q and {Sk; k = 1, ..., K}, we first concatenate the K support instance representations into one matrix as follow C = concat({Sk}K k=1), (3) where C ∈RTs×dc with Ts = PK k=1 Tk. Then, we collect the matching information between Q 2876 and C and calculate their matched representations eQ and eS as follows αmn = q⊤ mcn, (4) eqm = Ts X n=1 exp(αmn) PTs n′=1 exp(αmn′) cn, (5) ecn = Tq X m=1 exp(αmn) PTq m′=1 exp(αm′n) qm, (6) where m ∈{1, ..., Tq} in Eq. (5), n ∈{1, ..., Ts} in Eq. (6), qm and eqm are the m-th rows of Q and eQ respectively, and cn and ecn are the n-th rows of C and eC respectively. Next, the original representations and the matched representations are fused utilizing a ReLU layer as follows, ¯Q = ReLU([Q; eQ; |Q −eQ|; Q ⊙eQ]W1), (7) ¯C = ReLU([C; eC; |C −eC|; C ⊙eC]W1), (8) where ⊙is the element-wise product and W1 ∈ R4dc×dh is the weight matrix at this layer for reducing dimensionality. ¯C is further split into K representations {¯Sk}K k=1 corresponding to the K support instances where ¯Sk ∈RTk×dh. All ¯Sk and ¯Q are fed into a single-layer Bi-directional LSTM (BLSTM) with dh hidden units along each direction to obtain the final local matching results bSk ∈RTk×2dh and bQ ∈RTq×2dh. 
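For concreteness, the soft alignment of Eqs. (4)–(8) can be written compactly. The following is a minimal NumPy sketch rather than the authors' released implementation: array names and shapes are illustrative, W1 plays the role of the fusion weights in Eqs. (7)–(8), and the BLSTM that produces the final local matching results is omitted.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_matching(Q, C, W1):
    """Soft alignment between a query instance Q (Tq x dc) and the
    concatenation C (Ts x dc) of all K support instances (Eqs. 4-8)."""
    alpha = Q @ C.T                          # (Tq, Ts) word-level scores, Eq. (4)
    Q_tilde = softmax(alpha, axis=1) @ C     # aligned query representation, Eq. (5)
    C_tilde = softmax(alpha, axis=0).T @ Q   # aligned support representation, Eq. (6)

    def fuse(A, A_tilde):                    # Eqs. (7)-(8): fuse features, project, ReLU
        feats = np.concatenate([A, A_tilde, np.abs(A - A_tilde), A * A_tilde], axis=1)
        return np.maximum(feats @ W1, 0.0)

    return fuse(Q, Q_tilde), fuse(C, C_tilde)

# Example with the paper's dimensions (dc = 200, dh = 100); inputs are random here.
rng = np.random.default_rng(0)
Q_bar, C_bar = local_matching(rng.normal(size=(12, 200)),   # Tq = 12 query tokens
                              rng.normal(size=(60, 200)),   # Ts = 60 support tokens
                              rng.normal(size=(800, 100)))  # W1: (4*dc, dh)
```

In the full model, the fused query representation and the per-instance slices of the fused support representation are then passed through the shared BLSTM and pooled as in Eqs. (9)–(10).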
Local aggregation aims to convert the results of local matching into a single vector for each query and each support instance. In this paper, we employ a max pooling together with an average pooling, and concatenate their results into one vectorbsk or bq. The calculations are as follows, bsk =[max(bSk); ave(bSk)], ∀k ∈{1, ..., K}, (9) bq =[max( bQ); ave( bQ)], (10) where {bsk, bq} ∈R4dh. 4.3 Instance Matching and Aggregation Similar to conventional prototypical networks (Snell et al., 2017), our proposed method calculates class prototype bs via the representations of all support instances in this class, i.e., {bsk}K k=1. However, instead of using a naive mean operation, we aggregate instance-level representations via attention over {bsk}K k=1, where each weight is derived from the instance matching score between bsk and bq. The matching function is as follow, βk = v⊤(ReLU(W2[bsk; bq])), (11) where W2 ∈Rdh×8dh and v ∈Rdh. βk describes the instance-level matching degree between the query instance q and the support instance sk. Then, all {bsk}K k=1 are aggregated into one vector bs as bs = K X k=1 exp(βk) PK k′=1 exp(β′ k) bsk, (12) and bs is the class prototype. 4.4 Class Matching After the class prototype bs and the embedding vector of the query instance bq have been determined, the class-level matching function f({sk}K k=1, q) in Eq. (2) is defined as f({sk}K k=1, q) = v⊤(ReLU(W2[bs; bq])). (13) Eq. (11) and (13) have the same form. In our experiments, sharing the weights W2 and v in these two equations, i.e., employing the exactly same function for both instance-level and classlevel matching in each training iteration, lead to better performance. 4.5 Joint Training with Inconsistency Measurement If the representations of all support instances in a class are far away from each other, it could become difficult for the derived class prototype to capture the common characteristics of all support instances. Therefore, a function which measures the inconsistency among the set of support instances is designed. In order to avoid the high complexity of directly comparing every two support instances in a class, we calculate the inconsistency measurement as the average Euclidean distance between the support instances and the class prototype as Jincon = 1 NK N X i=1 K X k=1 ||bsi k −bsi||2 2, (14) where i is the class index and || · ||2 calculates the 2-norm of a vector. By combining Eqs. (1) and (14), the final objective function for training the whole model is defined as J = Jmatch + λJincon, (15) where λ is a hyper-parameter and was set as 1 in our experiments without any tuning. 2877 Model 5 Way 1 Shot 5 Way 5 Shot 10 Way 1 Shot 10 Way 5 Shot Meta Network (Han et al., 2018) 64.46 ± 0.54 80.57 ± 0.48 53.96 ± 0.56 69.23 ± 0.52 GNN (Han et al., 2018) 66.23 ± 0.75 81.28 ± 0.62 46.27 ± 0.80 64.02 ± 0.77 SNAIL (Han et al., 2018) 67.29 ± 0.26 79.40 ± 0.22 53.28 ± 0.27 68.33 ± 0.25 Prototypical Network (Han et al., 2018) 69.20 ± 0.20 84.79 ± 0.16 56.44 ± 0.22 75.55 ± 0.19 Proto-HATT (Gao et al., 2019) - 90.12 ± 0.04 - 83.05 ± 0.05 MLMAN 82.98 ± 0.20 92.66 ± 0.09 75.59 ± 0.27 87.29 ± 0.15 Table 2: Accuracies (%) of different models on FewRel test set. 5 Experiments 5.1 Dataset and Evaluation Metrics The few-shot relation classification dataset FewRel2 was adopted in our experiments. This dataset was first generated by distant supervision and then filtered by crowdsourcing to remove noisy annotations. The final FewRel dataset consists of 100 relations, each has 700 instances. 
The average number of tokens in each sentence is 24.99, and there are 124,577 unique tokens in total. The 100 relations are split into 64, 16 and 20 for training, validation and test respectively. Our experiments investigated four few-shot learning configurations, 5 way 1 shot, 5 way 5 shot, 10 way 1 shot, and 10 way 5 shot, which were the same as Han et al. (2018). According to the official evaluation scripts3, all results given by our experiments were the mean and standard deviation values of 10 training repetitions, and were tested using 20,000 independent samples. 5.2 Training Details and Hyperparameters All of the hyperparameters used in our experiments are listed in Table 3. The 50-dimensional Glove word embeddings released by Pennington et al. (2014) 4 were adopted in the context encoder and were fixed during training. For the unknown words, we just replaced them with an unique special token <UNK> and fixed its embedding as a zero vector. Previous study (Munkhdalai and Yu, 2017) found that the models trained on harder tasks may achieve better performances than using the same configurations at both training and test stages. Therefore, we set N = 20 to construct the train-support sets for 5-way and 10-way tasks. In our experiments, grid searches among dc ∈ {100, 150, 200, 250}, dh ∈{100, 150, 200, 250} 2https://thunlp.github.io/fewrel.html. 3https://thunlp.github.io/fewrel.html. 4https://nlp.stanford.edu/projects/ glove/. Component Parameter Value word embedding dimension 50 position feature max relative distance ±40 dimension 5 CNN window size 3 filter number dc 200 dropout dropout rate 0.2 unidirectional LSTM hidden size dh 100 optimization strategy SGD learning rate 0.1 size of query set R 5 Ntrain 20 λ 1 Table 3: Hyper-parameters of the models built in our experiments. and R ∈{5, 10, 15} were conducted to determine their optimal values. For optimization, we employed mini-batch stochastic gradient descent (SGD) with the initial learning rate of 0.1. The learning rate was decayed to one tenth every 20,000 steps. And also, dropout layers (Hinton et al., 2012) were inserted before CNN and LSTM layers and the drop rate was set as 0.2. 5.3 Comparison with Previous Work Table 2 shows the results of different models tested on FewRel test set. The results of the first four models, Meta Network (Munkhdalai and Yu, 2017), GNN (Garcia and Bruna, 2017), SNAIL (Mishra et al., 2018), Prorotypical Network (Snell et al., 2017), were reported by Han et al. (2018). These models were initially proposed for image classification. Han et al. (2018) just replaced their image encoding module with an instance encoding module and kept other modules unchanged. ProtoHATT (Gao et al., 2019) added hybrid attention mechanism to prototypical networks, mainly focusing on improving the performance on few-shot relation classification with N > 1. From Table 2, we can see that our proposed MLMAN model outperforms all other models by a large margin, which shows the effectiveness of considering the 2878 Model No. 5 Way 1 Shot 5 Way 5 Shot 10 Way 1 Shot 10 Way 5 Shot MLMAN 1 79.01 ± 0.20 88.86 ± 0.20 67.37 ± 0.19 80.07 ± 0.18 -Jincon 2 79.01 ± 0.20 88.33 ± 0.15 67.37 ± 0.19 79.38 ± 0.22 IM(shared →untied) 3 79.01 ± 0.20 86.77 ± 0.19 67.37 ± 0.19 77.66 ± 0.09 IA(att. →max.) 4 79.01 ± 0.20 87.84 ± 0.13 67.37 ± 0.19 78.86 ± 0.15 IA(att. →ave.) 
5 79.01 ± 0.20 87.48 ± 0.17 67.37 ± 0.19 78.58 ± 0.23 -Jincon 6 79.01 ± 0.20 86.23 ± 0.22 67.37 ± 0.19 77.36 ± 0.26 LM(-concatenation) 7 79.01 ± 0.20 85.48 ± 0.28 67.37 ± 0.19 74.56 ± 0.36 CM(MLP →ED) 8 76.52 ± 0.23 81.91 ± 0.13 62.89 ± 0.13 69.41 ± 0.15 -LM 9 74.13 ± 0.16 82.73 ± 0.16 59.71 ± 0.22 70.23 ± 0.23 CM(MLP →ED) 10 75.42 ± 0.23 82.36 ± 0.07 62.54 ± 0.26 70.45 ± 0.11 Table 4: Accuracies (%) of different models on FewRel development set. Here, IM stands for instance matching, IA stands for instance aggregation, LM stands for the local matching, CM stands for the class matching, MLP stands for multi-layer perceptrons and ED stands for Euclidean distance. interactions between query instance and support set at multiple levels. 5.4 Ablation Study In order to evaluate the contributions of individual model components, ablation studies were conducted. Table 4 shows the performance of our model and its ablations on the development set of FewRel. Considering that the first 6 ablations only affected the few-shot learning tasks with N > 1, model 2 to model 7 achieved exactly the same performance as the complete model (i.e., model 1) under 5 way 1 shot and 10 way 1 shot configurations. 5.4.1 Instance Matching and Aggregation First, the attention-based instance aggregation introduced in Section 4.3 was replaced with a max pooling (model 4) or an average pooling (model 5). We can see that the model with instance-level attentive aggregation (model 1) outperformed the ones using a max pooling (model 4) or an average pooling (model 5) on 5-shot tasks. Their difference were significantly at 1% significance level in t-test. The advantage of attentive pooling is that the weights of integrating all support instances can be determined dynamically according to the query. For example, when conducting instance matching and aggregation between the query instance and the support set in Table 1, the weights of the 5 instances in class A were 0.03, 0.46, 0.25, 0.08 and 0.18 respectively. Instance #2 achieved the highest weight because it had the best similarity with the query instance and was considered as the most helpful one when matching the query instance with class A. Then, the effectiveness of sharing the weight parameters in Eqs. (11) and (13) was evaluated by untying them (model 3). The performance of model 3 was much worse than the complete model (model 1) as shown in Table 4, which demonstrates the need of sharing the weights for calculating matching scores at both instance and class levels. 5.4.2 Inconsistency Measurement As introduced in Section 4.5, Jincon is designed to measure the inconsistency among the representations of all support instances in a class. After removing Jincon, model 2 was optimized only using the objective function Jmatch. We can see that it performed much worse than the complete model. Furthermore, we calculated the mean of the Euclidean distances between every support instance pair (bsi k,bsi k′) in the same class using model 1 and model 2 respectively. For each support set, the calculation can be written as D = 2 NK(K −1) N X i=1 K X k=1 K X k′=k+1 ||bsi k −bsi k′||2 2. (16) We sampled 20,000 support sets under the 5-way 5-shot configuration and calculated the mean of them. The results were 0.0199 and 0.0346 for model 1 and model 2 respectively, which means that Jincon was effective at forcing the representations of the support instances in the same class to be close with each other. Jincon was further removed from model 5 and model 6 was obtained. 
It can be found that the accuracy degradation from model 5 to model 6 was larger than the one from model 1 to model 2. This implies that the Jincon objective function also benefited from the attentive aggregation over support instances. 2879 5.4.3 Local Matching First, the concatenation operation in local matching was removed from model 6 in this ablation study. That’s to say, instead of concatenating the representations of all support instances {Sk}K k=1 into one single matrix as Eq. (3), local matching was conducted between the query instance and each support instance separately to get their vector representations {(bsk, bqk); k = 1, ..., K} (model 7). It should be noticed that this led to K different representations of a query instance according to each support class. Then, the mean over k for bsk and bqk were calculated to get the representations of the support set bs and the query instance bq. Comparing model 6 and model 7, we can see that the concatenation operation plays an important role in our model. One possible reason is that the concatenation operation can help local matching to restrain the support instances with low similarity to the query. Second, the whole local matching module together with the concatenation and attentive aggregation operation were removed from model 6, which led to model 9. Model 9 is similar to the one proposed by Snell et al. (2017) that encoded the support and query instances independently. The difference was that model 9 was equipped with more components, including an LSTM layer, two pooling operations, and a learnable class matching function. Comparing the performance of model 6 and model 9 in Table 4, we can see that the local matching operation significantly improves the performance in few-shot relation classification. Fig. 2 shows the attention weight matrix calculated between the query instance and the support instance #2 of class A in Table 1. From this figure, we can see that the attention-based local matching is able to capture some matching relations of local contexts, such as the head entities Eva Funck and Cindy Robbins, the tail entities Gustav and Kimberly Beck, the key phrases son and daughter, the same keyword “married”, and so on. 5.4.4 Class Matching In this experiment, we compared two class matching functions, (1) Euclidean distance (ED) (Snell et al., 2017) and (2) a learnable MLP function as shown by Eq. (13). In order to ignore the influence of the instance-level attentive aggregation, these two matching functions were compared based on model 6 and model 9. After converting the MLP function in model 6 and model 9 to Euclidean disHe married the American actress Cindy Robbins and was stepfather to her daughter , Kimberly Beck . support instance He was married to Eva Funck and they have a son Gustav . query instance Figure 2: The attention weight matrix calculated between the query instance and the support instance #2 of class A in Table 1. The darker units have larger value. The summation of one column in the matrix is one. tance, model 8 and model 10 were obtained. Comparing the performance of these models in Table 4, we have two findings. (1) When local matching was adopted, the learnable MLP for class matching (model 6) outperformed the ED metric (model 8) by a large margin. (2) After removing local matching, the learnable MLP for class matching (model 9) performed not as good as the ED metric (model 10). 
One possible reason is that the local matching process enhances the interaction between a query instance and a support set when calculating bs and bq. Thus, simple Euclidean distance between them may not be able to describe the complex correlation and dependency between them. On the other hand, MLP mapping is more powerful than calculating Euclidean distance, and can be more appropriate for class matching when local matching is also adopted. 6 Conclusions In this paper, a neural network with multi-level matching and aggregation has been proposed for few-shot relation classification. First, the query and support instances are encoded interactively via local matching and aggregation. Then, the support instances in a class are further aggregated to form the class prototype and the weights are calculated by attention-based instance matching. Finally, a learnable MLP matching function is employed to calculate the class matching score between the query instance and each candidate class. Furthermore, an additional objective function is designed to improve the consistency among the vector rep2880 resentations of all support instances in a class. Experiments have demonstrated the effectiveness of our proposed model, which achieves state-of-theart performance on the FewRel dataset. Studying few-shot relation classification with data generated by distant supervision and extending our MLMAN model to zero-shot learning will be the tasks of our future work. Acknowledgments We thank the anonymous reviewers for their valuable comments. This work was partially funded by the National Nature Science Foundation of China (Grant No. U1636201, 61871358). References Steven Bethard and James H. Martin. 2007. Cutmp: Temporal relation classification using syntactic and semantic features. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 129–132. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Qian Chen, Zhen-Hua Ling, and Xiaodan Zhu. 2018. Enhancing sentence embedding with generalized pooling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1815–1826. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Association for Computational Linguistics. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Association for Computational Linguistics. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400. Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. Victor Garcia and Joan Bruna. 2017. Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043. 
Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. arXiv preprint arXiv:1709.04348. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803– 4809. Association for Computational Linguistics. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. S Hochreiter and J Schmidhuber. 1997. Long shortterm memory. Neural Computation, 9(8):1735– 1780. Seonhoon Kim, Jin-Hyuk Hong, Inho Kang, and Nojun Kwak. 2018. Semantic sentence matching with densely-connected recurrent and co-attentive information. arXiv preprint arXiv:1805.11360. Yoon Kim. 2014. Convolutional neural networks for sentence classification. Eprint Arxiv. Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2018. A simple neural attentive metalearner. Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In AAAI, volume 16, pages 2786–2792. 2881 Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. arXiv preprint arXiv:1703.00837. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Metalearning with memory-augmented neural networks. In International conference on machine learning, pages 1842–1850. Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298– 1307. Association for Computational Linguistics. 
Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1980–1990. Association for Computational Linguistics. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. Dublin City University and Association for Computational Linguistics. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207–212. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2882–2894 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2882 Quantifying Similarity between Relations with Fact Distribution Weize Chen Hao Zhu Xu Han Zhiyuan Liu Maosong Sun Department of Computer Science and Technology, Tsinghua University, Beijing, China State Key Lab on Intelligent Technology and Systems Institute for Artificial Intelligence {wei10,zhuhao15,hanxu17}@mails.tsinghua.edu.cn {liuzy,sms}@tsinghua.edu.cn Abstract We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is intractable, we provide a sampling-based method to get a good approximation. We empirically show the outputs of our approach significantly correlate with human judgments. By applying our method to various tasks, we also find that (1) our approach could effectively detect redundant relations extracted by open information extraction (Open IE) models, that (2) even the most competitive models for relational classification still make mistakes among very similar relations, and that (3) our approach could be incorporated into negative sampling and softmax classification to alleviate these mistakes. The source code and experiment details of this paper can be obtained from https://github.com/ thunlp/relation-similarity. 1 Introduction Relations1, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs) (Suchanek et al., 2007; Bollacker et al., 2008). Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations Author contributions: Hao Zhu initiated the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author. 1Sometimes relations are also named properties. Sentence The crisis didn’t influence his two daughters OBJ and SUBJ. Correct per:siblings Predicted per:parents Similarity Rank 2 Table 1: An illustration of the errors made by relation extraction models. The sentence contains obvious patterns indicating the two persons are siblings, but the model predicts it as parents. We introduce an approach to measure the similarity between relations. Our result shows “siblings” is the second most similar one to “parents”. By applying this approach, we could analyze the errors made by models, and help reduce errors. are still overlooked, (Table 1 shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken as different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust method. In this paper, we introduce an adaptive and general framework for measuring similarity of the pairs of relations. 
Suppose for each relation r, we have obtained a conditional distribution, P(h, t | r) (h, t ∈E are head and tail entities, and r ∈R is a relation), over all head-tail entity pairs given r. We could quantify similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is “logical”: e.g., the network might prefer an athlete has the same nationality as same as his/her national team rather than other nations. 2883 Intuitively, two similar relations should have similar conditional distributions over head-tail entity pairs P( h, t | r ), e.g., the entity pairs associated with be trade to and play for are most likely to be athletes and their clubs, whereas those associated with live in are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback–Leibler (KL) divergence of both directions as the metric. However, computing exact KL requires iterating over the whole entity pair space E × E, which is quite intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency. Besides developing a framework for assessing the similarity between relations, our second contribution is that we have done a survey of applications. We present experiments and analysis aimed at answering five questions: (1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches based on other kinds of relation embeddings to define a similarity? (§3.4 and §5) (2) Open IE models inevitably extract many redundant relations. How can our approach help reduce such redundancy? (§6) (3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (§7) (4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (§8) (5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (§9) Finally, we conclude with a discussion of valid extensions to our method and other possible applications. 2 Learning Head-Tail Distribution Just as introduced in §1, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution. 2.1 Formal Definition of Fact Distribution A fact is a triple (h, r, t) ∈E × R × E, where h and t are called head and tail entities, r is the relation connecting them, E and R are the sets of entities and relations respectively. We consider a score function Fθ : E × R × E →R maps all triples to a scalar value. 
As a special case, the function can be factorized into the sum of two parts: Fθ( h, t; r ) ≜uθ1(h; r) + uθ2(t; h, r). We use Fθ to define the unnormalized probability. ˜Pθ( h, t | r ) ≜exp Fθ( h, r; t ) (1) for every triple ( h, r, t ). The real parameter θ can be adjusted to obtain difference distributions over facts. In this paper, we only consider locally normalized version of Fθ: uθ1(h; r) = log exp ˜uθ1(h; r) P h′ exp ˜uθ1(h′; r), uθ2(t; h, r) = log exp ˜uθ2(t; h, r) P t′ exp ˜uθ2(t′; h, r), (2) where ˜uθ1 and ˜uθ2 are directly parameterized by feed-forward neural networks. Through local normalization, ˜Pθ( h, t | r ) is naturally a valid probability distribution, as the partition function P h,t exp Fθ( h, t; r ) = 1. Therefore, Pθ( h, t | r ) = ˜Pθ( h, t | r ). 2.2 Neural architecture design Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in equation (2) as ˜uθ1(h; r) = MLPθ1(r)⊤h, ˜uθ2(t; h, r) = MLPθ2([h; r])⊤t, (3) where each MLPθ represents a multi-layer perceptron composed of layers like y = relu(W x + b), h, r, t are embeddings of h, r, t, and θ includes weights and biases in all layers. 2.3 Training Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters θ∗: θ∗= argmin θ L(G) = argmin θ X ( h,r,t )∈G −log Pθ( h, t | r ), (4) 2884 where G ⊂E × R × E is a set of triples.2 The whole set of parameters, θ = {θ1, θ2, {e, ∀e ∈ E}, {r, ∀r ∈R}}. We train these parameters by Adam optimizer (Kingma and Ba, 2014). Training details are shown in Appendix C. 3 Quantifying Similarity So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section. 3.1 Relations as Distributions In this paper, we provide a probability view of relations by representing relation r as a probability distribution Pθ∗( h, t | r ). After training the neural network on a given set of triples, the model is expected to generalize well on the whole E × R × E space. Note that it is very easy to calculate Pθ∗( h, t | r ) in our model thanks to local normalization (equation (2)). Therefore, we can compute it by Pθ∗( h, t | r ) = exp(uθ1(h; r) + uθ2(t; h, r)). (5) 3.2 Defining Similarity As the basis of our definition, we hypothesize that the similarity between Pθ∗( h, t | r ) reflects the similarity between relations.3 For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: S(r1, r2) = g  DKL ( Pθ∗( h, t | r1 )|| Pθ∗( h, t | r2 )) , DKL ( Pθ∗( h, t | r2 )|| Pθ∗( h, t | r1 ))  , (6) where DKL ( ·|| ·) denotes Kullback–Leibler divergence, DKL ( Pθ∗( h, t | r1 )|| Pθ∗( h, t | r2 )) = Eh,t∼Pθ∗( h,t|r1 ) log Pθ∗( h, t | r1 ) Pθ∗( h, t | r2 ) (7) 2In our applications, the set of triples could be a knowledge base or a set of triples in the training set etc. 3§5 provides empirical results to corroborate this hypothesis. vice versa, and function g(·, ·) is a symmetrical function. 
To keep the coherence between semantic meaning of “similarity” and our definition, g should be a monotonically decreasing function. Through this paper, we choose to use an exponential family4 composed with max function, i.e., g(x, y) = e−max(x,y). Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in r1 and r2. Intuitively, if Pθ∗( h, t | r1 ) mainly distributes on a proportion of entities pairs that Pθ∗( h, t | r2 ) emphasizes, r1 is only hyponymy of r2. Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in §3.4. 3.3 Calculating Similarity Just as introduced in §1, it is intractable to compute similarity exactly, as involving O(|E|2) computation. Hence, we consider the monte-carlo approximation: DKL ( Pθ∗( h, t | r1 )|| Pθ∗( h, t | r2 )) = Eh,t∼Pθ∗( h,t|r1 ) log Pθ∗( h, t | r1 ) Pθ∗( h, t | r2 ) = 1 |S| X h,t∈S log Pθ∗( h, t | r1 ) Pθ∗( h, t | r2 ), (8) where S is a list of entity pairs sampled from Pθ∗( h, t | r1 ). We use sequential sampling5 to gain S, which means we first sample h given r from uθ1(h; r), and then sample t given h and r from uθ2(t; h, r).6 3.4 Relationship with other metrics Previous work proposed various methods for representing relations as vectors (Bordes et al., 2013; Yang et al., 2015), as matrices (Nickel et al., 2011), even as angles (Sun et al., 2019), etc. Based on each of these representations, one could easily define various similarity quantification methods.7 We show in Table 2 the best one of them in each category of relation presentation. Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) 4We view KL divergences as energy functions. 5Sampling h and t at the same time requires O(|E|2) computation, while sequential sampling requires only O(|E|) computation. 6It seems to be a non-symmetrical method, and sampling from the mixture of both forward and backward should yield a better result. Surprisingly, in practice, sampling from single direction works just as well as from both directions. 7Taking the widely used vector representations as an example, we can define the similarity between relations based on cosine distance, dot product distance, L1/L2 distance, etc. 2885 Relation Representation Method Similarity Quantification Vectors TransE (Bordes et al., 2013) S(r1, r2) = exp r⊤ 1 r2/∥r1∥2∥r2∥2  Vectors DistMult (Yang et al., 2015) S(r1, r2) = exp r⊤ 1 r2/∥r1∥2∥r2∥2  Matrices RESCAL (Nickel et al., 2011) S(r1, r2) = exp(∥Mr1 −Mr2∥F ) Angles RotatE (Sun et al., 2019) S(r1, r2) = exp(−Pn i=1|r1,i −r2,i|1) Probability Distribution Ours equation (6) Table 2: Methods to define a similarity function with different types of relation representations Figure 1: Head-tail entity pairs of relation “be an unincorporated community in” (in blue) and “be a small city in” (in red) sampled from our fact distribution model. The coordinates of the points are computed by t-sne (Maaten and Hinton, 2008) on the concatenation of head and tail embeddings8. The two larger blue and red points indicate the embeddings of these two relations. 
the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. Figure 1 provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure. 4 Dataset Construction We show the statistics of the dataset we use in Table 3, and the construction procedures will be introduced in this section. 4.1 Wikidata In Wikidata (Vrandeˇci´c and Krötzsch, 2014), facts can be described as (Head item/property, Property, Tail item/property). To construct a dataset suitable for our task, we only consider the facts whose head 8Embeddings used in this graph are from a trained TransE model. Matrix Vector(TransE) Angle Vector(DistMult) Ours 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Spearman Correlation With Human Judgement Figure 2: Spearman correlations between human judgment and models’ outputs. The inter-subject correlation is also shown as a horizontal line with its standard deviation as an error band. Our model shows the strongest positive correlation with human judgment, and, in other words, the smallest margin with human inter-subject agreement. Significance: ***/**/* := p < .001/.01/.05. entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations. 4.2 ReVerb Extractions ReVerb (Fader et al., 2011) is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia9. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset. 4.3 FB15K and TACRED FB15K (Bordes et al., 2013) is a subset of freebase. TACRED (Zhang et al., 2017) is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied. 9http://reverb.cs.washington.edu/ 2886 5 Human Judgments Following Miller and Charles (1991); Resnik (1999) and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata (Vrandeˇci´c and Krötzsch, 2014)10 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same)11 for each pair. The inter-subject correlation, estimated by leavingone-out method (Weiss and Kulikowski, 1991), is r = 0.763, standard deviation = 0.060. This important reference value (marked in Figure 2) could be seen as the highest expected performance for machines (Resnik, 1999). 
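Before turning to the evaluation, the computation described in Sections 2 and 3 can be summarized in code. The following is a minimal NumPy sketch rather than the released implementation: head_scores(r) and tail_scores(h, r) are placeholder callables standing for the trained networks of Eq. (3) (each returning unnormalized scores over all entities), and the sample size is an arbitrary choice.

```python
import numpy as np

def log_p_pair(h, t, r, head_scores, tail_scores):
    """log P(h, t | r) under local normalization, i.e. Eqs. (2) and (5)."""
    sh = head_scores(r)                 # unnormalized scores over all head entities
    st = tail_scores(h, r)              # unnormalized scores over all tail entities
    return (sh[h] - np.logaddexp.reduce(sh)) + (st[t] - np.logaddexp.reduce(st))

def sample_pair(r, head_scores, tail_scores, rng):
    """Sequential sampling: draw h ~ P(h | r), then t ~ P(t | h, r)."""
    ph = np.exp(head_scores(r) - np.logaddexp.reduce(head_scores(r)))
    h = rng.choice(len(ph), p=ph)
    pt = np.exp(tail_scores(h, r) - np.logaddexp.reduce(tail_scores(h, r)))
    t = rng.choice(len(pt), p=pt)
    return h, t

def similarity(r1, r2, head_scores, tail_scores, n_samples=1024, seed=0):
    """S(r1, r2) = exp(-max(KL(r1||r2), KL(r2||r1))), Eqs. (6)-(8)."""
    rng = np.random.default_rng(seed)

    def kl(ra, rb):                     # Monte Carlo estimate of Eq. (8)
        diffs = []
        for _ in range(n_samples):
            h, t = sample_pair(ra, head_scores, tail_scores, rng)
            diffs.append(log_p_pair(h, t, ra, head_scores, tail_scores)
                         - log_p_pair(h, t, rb, head_scores, tail_scores))
        return float(np.mean(diffs))

    return float(np.exp(-max(kl(r1, r2), kl(r2, r1))))
```

Because both scoring functions are locally normalized, no partition function over the full entity pair space is needed; only the two softmax normalizations over single entities appear in log_p_pair.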
To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in Table 2. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones work best in knowledge base completion, we do expect the similarity quantification approaches based on them could measure some degree of similarity. As shown in Figure 2, the three baseline models could achieve moderate (0.1–0.5) positive correlation. On the other hand, our model shows a stronger correlation (0.63) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments. These results provide evidence for our claim raised in §3.2. 6 Redundant Relation Removal Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g.„ (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as Text-Runner (Yates et al., 2007), ReVerb (Fader et al., 2011), and Standford Open IE (Angeli et al., 2015). However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there are a fair amount of redundant relation patterns after extracting those relation patterns. Furthermore, the redundant patterns lead to 10Wikidata provides detailed descriptions to properties (relations), which could help subjects understand the relations better. 11The detailed instruction is attached in the Appendix F. Triple Set |R| |E| #Fact Section Wikidata 188 112,946 426,067 §5 and §6.1 ReVerb Extractions 3,736 194,556 266,645 §6.2 FB15K 1,345 14,951 483,142 §7.1 and §8 TACRED 42 29,943 68,124 §7.2 and §9 Table 3: Statistics of the triple sets used in this paper. some redundant relations in KBs. Recently, some efforts are devoted to Open Relation Extraction (Open RE) (Lin and Pantel, 2001; Yao et al., 2011; Marcheggiani and Titov, 2016; ElSahar et al., 2017), aiming to cluster relation patterns into several relation types instead of redundant relation patterns. Whenas, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand. On the other hand, these methods still rely on the conventional similarity metrics mentioned above. In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (§6.1). Then, we evaluate our model and baselines on the realworld dataset extracted by Open IE methods (§6.2). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy. 6.1 Toy Experiment In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata12 and implement Chinese restaurant process13 to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing less than 50 times to eventually get 1165 relations. 
All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those subrelations into one relation. As Figure 2 shown that the matrices-based approach is less effective than other approaches, we leave this approach out of this experiment. The results are shown in Table 4. 12The construction procedure is shown in §4.1. 13Chinese restaurant process is shown in Appendix B. 2887 Method P R F1 Vectors (TransE) 0.28 0.14 0.18 Vectors (DistMult) 0.44 0.41 0.42 Angles 0.48 0.43 0.45 Ours 0.65 0.50 0.57 Table 4: The experiment results on the toy dataset show that our metric based on probability distribution significantly outperforms other relation similarity metrics. 6.2 Real World Experiment In this subsection, we evaluate various relation similarity metrics on the real-world Open IE patterns. The dataset are constructed by ReVerb. Different patterns will be regarded as different relations during training, and we also adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs for their merging or not, meanwhile it is also inappropriate to take distantly supervised annotations as golden results. Hence, we propose a novel metric approximating recall and precision evaluation based on minimal human annotations for evaluation in this experiment. Approximating Recall and Precision Recall Recall is defined as the yielding fraction of true positive instances over the total amount of real positive14 instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our settings, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs O(|R|2) in Open IE. A promising method is to use rejection sampling by uniform sampling from the whole space, and only keep the synonymous ones judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we could use normalized importance sampling as an alternative to get an unbiased estimation of recall. Theorem 1. 15 Suppose every sample x ∈X has a label f(x) ∈{0, 1}, and the model to be evaluated also gives its prediction ˆf(x) ∈{0, 1}. The recall can be written as Recall = Ex∼UI[ ˆf(x) = 1], (9) where U is the uniform distribution over all samples with f(x) = 1. If we have a proposal distribu14Often called relevant in information retrieval field. 15See proof in Appendix A 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision Ours TransE RotatE DistMult Figure 3: Precision-recall curve on Open IE task comparing our similarity function with vector-based and angle-based similarity. Error bar represents 95% confidential interval. Bootstraping is used to calculate the confidential interval. tion q(x) satisfying ∀x, f(x) = 1 ∧ˆf(x) = 1 ⇒ q(x) ̸= 0, we get an unbiased estimation of recall: Recall ≈ n X i=1 I[ ˆf(xi) = 1] ˆwi, (10) where ˆwi is a normalized version of wi = I[f(xi)=1] ˜q(xi) , where ˜q is the unnormalized version of q, and {xi}n i=1 are i.i.d. drawn from q(x). Precision Similar to equation (9), we can write the expectation form of precision: Precision = Ex∼U′I[f(x) = 1], (11) where U ′ is the uniform distribution over all samples with ˆf(x) = 1. 
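Before the precision estimate is completed below, the recall estimator of equation (10) can be sketched as follows. The sketch assumes the human labels f(x_i) and the unnormalized proposal density are available as arrays over the n relation pairs drawn i.i.d. from q.

```python
import numpy as np

def approx_recall(human_label, model_predicts_same, unnorm_q):
    """Self-normalized importance-sampling recall estimate, equation (10).

    For each sampled relation pair x_i (drawn i.i.d. from the proposal q):
      human_label[i]         = I[f(x_i) = 1]      (annotators judge same relation)
      model_predicts_same[i] = I[f_hat(x_i) = 1]  (similarity above threshold)
      unnorm_q[i]            = unnormalized proposal density q~(x_i)
    """
    f = np.asarray(human_label, dtype=float)
    f_hat = np.asarray(model_predicts_same, dtype=float)
    w = f / np.asarray(unnorm_q, dtype=float)   # w_i = I[f(x_i)=1] / q~(x_i)
    w_hat = w / w.sum()                         # self-normalized weights
    return float((f_hat * w_hat).sum())
```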
As these samples could be found out by performing models on it. We can simply approximate precision by Monte Carlo Sampling: Precision ≈1 n n X i=1 I[f(xi) = 1], (12) where {xi}n i=1 i.i.d. ∼U ′. In our setting, x = (r1, r2) ∈R × R, f(x) = 1 means r1 and r2 are the same relations, ˆf(x) = 1 means S(r1, r2) is larger than a threshold λ. Results The results on the ReVerb Extractions dataset that we constructed are described in Figure 3. To approximate recall, we use the similarity scores as the proposal distribution ˜q. 500 relation pairs are then drawn from ˜q. To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge 2888 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% Similarity Rank 0.050 0.075 0.100 0.125 0.150 0.175 0.200 0.225 Frequency (a) FB15K 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% Similarity Rank 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Frequency (b) TACRED Figure 4: Similarity rank distributions of distracting relations on different tasks and datasets. Most of the distracting relations have top similarity rank. Distracting relations are, as defined previously, the relations have a higher rank in the relation classification result than the ground truth. MRR H@3 H@1 Metric 0.90 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 Value Negative Sampling Method Uniform Ours Figure 5: Improvement of using similarity in a heuristic method for negative sampling. MRR denotes the mean reciprocal rank. whether two relations in a relation pair have the same meaning. A relation pair is viewed valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with equation (10) and equation (12). Apart from the confidential interval of precision shown in the figure, the largest 95% confidential interval among thresholds for recall is 0.0416. From the result, we could see that our model performs much better than other models’ similarity by a very large margin. 7 Error Analysis for Relational Classification In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence. 7.1 Relation Prediction We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE (Bordes et al., 2013) as the subject as well as FB15K (Bordes et al., 2013) as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing L = X (h,r,t)∈D [d(h + r, t) −d(h′ + r′, t′) + γ]+, (13) 16The figure is shown in Figure 6 where D is the set of training triples, d(·, ·) is the distance function, (h′, r′, t′)17 is a negative sample with one element different from (h, r, t) uniformly sampled from E × R × E, and γ is the margin. During testing, for each entity pair (h, t), TransE rank relations according to d(h + r, t). For each (h, r, t) in the test set, we call the relations with higher rank scores than r distracting relations. We then compare the similarity between the golden relation and distracting relations. 
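The extraction of distracting relations from a trained TransE model can be sketched as below. The entity and relation embeddings are assumed to come from a model trained with the loss in equation (13); handling of entity pairs that hold under more than one relation is deferred to the note that follows.

```python
import numpy as np

def distracting_relations(h_emb, t_emb, rel_embs, gold_rel_id, norm=1):
    """Rank all relations for an entity pair (h, t) by the TransE distance
    d(h + r, t) and return the relations ranked above the gold one.

    h_emb, t_emb: entity embeddings, shape (d,)
    rel_embs:     relation embedding matrix, shape (|R|, d)
    """
    dists = np.linalg.norm(h_emb + rel_embs - t_emb, ord=norm, axis=1)
    order = np.argsort(dists)                 # smallest distance ranked first
    gold_rank = int(np.where(order == gold_rel_id)[0][0])
    # Relations scored better than the gold relation; filtering out relations
    # that also genuinely hold for (h, t) is omitted here for brevity.
    return order[:gold_rank].tolist()
```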
Note that some entity pairs could correspond to more than one relations, in which case we just do not see them as distracting relations. 7.2 Relation Extraction For relation extraction, we consider the supervised relation extraction setting and TACRED dataset (Zhang et al., 2017). As for the subject model, we use the best model on TACRED dataset — positionaware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset. 7.3 Results Figure 4 shows the distribution of similarity ranks of distracting relations of the above mentioned models’ outputs on both relation prediction and relation extraction tasks. From Figures 4a and 4b, we could observe the most distracting relations are the most 17Note that only head and tail entities are changed in the original TransE when doing link prediction. But changing r′ results in better performance when doing relation prediction. 2889 Model P R F1 Traditional Patterns 86.9 23.2 36.6 LR 73.5 49.9 59.4 Neural CNN 75.6 47.5 58.3 CNN-PE 70.3 54.2 61.2 SDP-LSTM (Xu et al., 2015) 66.3 52.7 58.7 LSTM 65.7 59.9 62.7 PA-LSTM (Zhang et al., 2017) 65.7 64.5 65.1 Neural+Ours PA-LSTM (Softmax-Margin Loss) 68.5 64.7 66.6 Table 5: Improvement of using similarity in softmaxmargin loss. similar ones, which corroborate our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try to do the negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in Appendix E. 8 Similarity and Negative Sampling Based on the observation presented in §7.3, we find out that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as highquality negative samples. For a given valid triple (h, r, t), we corrupt the triple by substituting r with r′ with the probability, p = S(r, r′)1/α P r′′∈R\{r} S(r, r′′)1/α , (14) where α is the temperature of the exponential function, the bigger the α is, the flatter the probability distribution is. When the temperature approaches infinite, the sampling process reduces to uniform sampling. In training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, it enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as the training processes to help the model distinguish the similar relations. This can be also viewed as a process of curriculum learning(Bengio et al., 2009), the data fed to the model gradually changes from simple negative triples to hard ones. We perform relation prediction task on FB15K with TransE. Following Bordes et al. (2013), we use the "Filtered" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model’s performance, especially on Hit@1 (Figure 5). Training details are described in Appendix C. 
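A minimal sketch of the temperature-controlled negative sampling in equation (14) is given below, together with the annealing schedule reported in Appendix C.1. The precomputed pairwise similarity matrix S is an implementation assumption.

```python
import numpy as np

def sample_negative_relation(gold_rel_id, sim_matrix, alpha, rng=np.random):
    """Sample a corrupting relation r' for a triple (h, r, t) following eq. (14).

    sim_matrix[i, j] = S(r_i, r_j); alpha is the temperature (a large alpha
    gives a nearly uniform distribution, a small alpha concentrates the mass
    on the relations most similar to r).
    """
    scores = np.power(sim_matrix[gold_rel_id], 1.0 / alpha)
    scores[gold_rel_id] = 0.0                 # never "corrupt" r into itself
    probs = scores / scores.sum()
    return int(rng.choice(len(probs), p=probs))

def temperature(epoch, t0=8192.0, halve_every=200, t_min=16.0):
    """Annealing schedule from Appendix C.1: start high, halve every 200
    epochs, and keep the temperature fixed once it reaches 16."""
    return max(t0 / (2 ** (epoch // halve_every)), t_min)
```

Starting with a high temperature reproduces near-uniform sampling, and lowering it over training gradually shifts the negatives toward the most confusable relations, matching the curriculum-learning interpretation given above.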
9 Similarity and Softmax-Margin Loss Similar to §8, we find out that relation extraction models often make wrong preditions on similar relations. In this section, we use similarity as an adaptive margin in softmax-margin loss to improve the performance of relation extraction models. As shown in (Gimpel and Smith, 2010), SoftmaxMargin Loss can be expressed as L = n X i=1 −θT f(x(i), r(i))+ log X r∈R(x(i)) exp{θT f(x(i), r) + cost(r(i), r)}, (15) where R(x) denotes a structured output space for x, and ⟨x(i), r(i)⟩is ith example in training data. We can easily incorporate similarity into cost function cost(r(i), r). In this task, we define the cost function as αS(r(i), r), where α is a hyperparameter. Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to Position-aware Attention LSTM (PA-LSTM)(Zhang et al., 2017), and Table 5 shows our method improves the performance of PA-LSTM. Training details are described in Appendix C. 10 Related Works As many early works devoted to psychology and linguistics, especially those works exploring semantic similarity (Miller and Charles, 1991; Resnik, 1999), researchers have empirically found there are various different categorizations of semantic relations among words and contexts. For promoting research on these different semantic relations, Bejar et al. (1991) explicitly defining these relations and Miller (1995) further systematically organize rich semantic relations between words via a database. For identifying correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity (Turney, 2005, 2006; Zhila et al., 2013; Pedersen, 2012; Rink and Harabagiu, 2012; Mikolov et al., 2013b,a). 2890 With the ongoing development of information extraction and effective construction of KBs (Suchanek et al., 2007; Bollacker et al., 2008; Bizer et al., 2009), relations are further defined as various types of latent connections between objects more than semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, there are accordingly various methods proposed for discovering more relations and their facts, including open information extraction (Brin, 1998; Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002; Banko et al., 2007; Zhu et al., 2009; Etzioni et al., 2011; Saha et al., 2017) and relation extraction (Riedel et al., 2013; Liu et al., 2013; Zeng et al., 2014; Santos et al., 2015; Zeng et al., 2015; Lin et al., 2016), and relation prediction (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015b,a; Xie et al., 2016). For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distribution so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above. 11 Conclusion and Future Work In this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. 
We note that there are a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audios; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu is supported by Tsinghua University Initiative Scientific Research Program, and Chen is also supported by DCST Student Academic Training Program. Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program. References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of JCDL, pages 85–94. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of ACL, pages 344–354. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of IJCAI, pages 2670–2676. Isaac I Bejar, Roger Chaffin, and Susan E Embretson. 1991. Cognitive and psychometric analysis of analogical problem solving. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of ICML, pages 41–48. Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak, and Sebastian Hellmann. 2009. Dbpedia-a crystallization point for the web of data. Web Semantics: science, services and agents on the world wide web, 7:154–165. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of NIPS, pages 2787–2795. Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In Proceedings of WWW, pages 172–183. Hady ElSahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. 2017. Unsupervised open relation extraction. In Proceedings of ESWC, pages 12–16. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. 2011. Open information extraction: the second generation. In Proceedings of IJCAI, pages 3–10. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP, pages 1535– 1545. 2891 Kevin Gimpel and Noah A Smith. 2010. Softmaxmargin crfs: Training log-linear models with cost functions. In Proceedings of NAACL, pages 733–736. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Dekang Lin and Patrick Pantel. 2001. Dirt@ sbt@ discovery of inference rules from text. In Proceedings of SIGKDDs, pages 323–328. Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. 
In Proceedings of EMNLP, pages 705–714. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI, pages 2181–2187. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL, pages 2124–2133. ChunYang Liu, WenBo Sun, WenHan Chao, and Wanxiang Che. 2013. Convolution neural network for relation extraction. In Proceedings of ICDM, pages 231–242. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLR, 9:2579–2605. Diego Marcheggiani and Ivan Titov. 2016. Discretestate variational autoencoders for joint discovery and factorization of relations. TACL, 4:231–244. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICLR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38:39–41. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6:1–28. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of ICML, pages 809–816. Art B. Owen. 2013. Monte Carlo theory, methods and examples. Ted Pedersen. 2012. Duluth: Measuring degrees of relational similarity with the gloss vector measure of semantic relatedness. In Proceedings of SemEval 2012, pages 497–501. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL, pages 41–47. Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of artificial intelligence research, 11:95–130. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL, pages 74–84. Bryan Rink and Sanda Harabagiu. 2012. Utd: Determining relational similarity using lexical patterns. In Proceedings of SemEval 2012, pages 413–418. Swarnadeep Saha, Harinder Pal, et al. 2017. Bootstrapping for numerical open ie. In Proceedings of ACL, volume 2, pages 317–323. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of ACL-IJCNLP, pages 626–634. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of WWW, pages 697–706. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In Proceedings of ICLR. Peter D Turney. 2005. Measuring semantic similarity by latent relational analysis. In Proceedings of IJCAI, pages 1136–1141. Peter D Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32:379–416. Denny Vrandeˇci´c and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57:78–85. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. 
In Proceedings of AAAI, pages 1112–1119. Sholom M Weiss and Casimir A Kulikowski. 1991. Computer systems that learn: classification and prediction methods from statistics, neural nets, machine learning, and expert systems. Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In Proceedings of IJCAI, pages 2965–2971. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP, pages 1785–1794. 2892 Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of EMNLP, pages 1456–1466. Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information extraction on the web. In Proceedings of NAACL, pages 25–26. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP, pages 1753–1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of EMNLP, pages 35–45. Alisa Zhila, Wen-tau Yih, Christopher Meek, Geoffrey Zweig, and Tomas Mikolov. 2013. Combining heterogeneous models for measuring relational similarity. In Proceedings of NAACL, pages 1000–1009. Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and JiRong Wen. 2009. Statsnowball: a statistical approach to extracting entity relationships. In Proceedings of WWW, pages 101–110. 2893 A Proofs to theorems in the paper Proof. Recall = P x I[f(x) = 1 ∧ˆf(x) = 1] P x I[f(x) = 1] = X x I[f(x) = 1 ∧ˆf(x) = 1] P x′ I[f(x′) = 1] = X x I[f(x) = 1]I[ ˆf(x) = 1] P x′ I[f(x′) = 1] = X x I[f(x) = 1] P x′ I[f(x′) = 1]I[ ˆf(x) = 1] = X x PU(x)I[ ˆf(x) = 1] = Ex∼UI[ ˆf(x) = 1] (16) If we have a proposal distribution q(x) satisfying ∀x, f(x) = 1 ∧ˆf(x) = 1 ⇒q(x) ̸= 0, then equation (16) can be further written as Recall = Ex∼qI[ ˆf(x) = 1]PU(x) q(x) (17) Sometimes, it’s hard for us to compute normalized probability q. To tackle this problem, consider selfnormalized importance sampling as an unbiased estimation (Owen, 2013), Ex∼qI[ ˆf(x) = 1]PU(x) q(x) ≈ Pn i=1 I[ ˆf(xi) = 1]PU(xi)/q(xi) Pn i=1 PU(xi)/q(xi) = Pn i=1 I[ ˆf(xi) = 1]wi Pn i=1 wi (wi = I[f(xi) = 1] ˜q(xi) ) = n X i=1 I[ ˆf(xi) = 1] ˆwi, (18) where ˆwi is the normalized version of w. B Chinese Restaurant Process Specifically, for a relation r with currently m subrelations, we turn it to a new sub-relation with probability p = α α + n + 1 (19) or to the kth existing sub-relation with probability p = nk α + n + 1 (20) where nk is the size of kth existing sub-relation, n is the sum of the number of all sub-relationships of r, and α is a hyperparameter, in which case we use α = 1. 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Standard Deviation Our Model TransE RotatE DistMult Figure 6: The recall standard deviation of different models. 
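The sub-relation splitting of Appendix B can be sketched as follows. The weights are proportional to the terms in equations (19) and (20) and are normalized before sampling; sub-relations with fewer than 50 triples are then filtered out, as described in §6.1.

```python
import numpy as np

def crp_split(triple_ids, alpha=1.0, rng=np.random):
    """Assign one relation's triples to sub-relations with a Chinese restaurant
    process: a triple opens a new sub-relation with weight alpha, or joins an
    existing sub-relation k with weight n_k (its current size).
    """
    assignments = []      # sub-relation index for each triple
    sizes = []            # n_k for each existing sub-relation
    for _ in triple_ids:
        weights = np.array(sizes + [alpha], dtype=float)
        probs = weights / weights.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(sizes):       # open a new sub-relation
            sizes.append(1)
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments
```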
C Training Details In Wikidata and ReVerb Extractions dataset, we manually split a validation set, assuring every entity and relation appears in validation set also appears in training set. While minimizing loss on the training set, we observe the loss on the validation set and stop training as validation loss stops to decrease. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on the dataset as the pretrained embeddings for our model. C.1 Training Details on Negative Sampling The sampling is launched with an initial temperature of 8192. The temperature drops to half every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3. C.2 Training Details on Softmax-Margin Loss The sampling is launching with an initial temperature of 64. The temperature drops by 20% per epoch, and remains stable once it hits 16. The alpha we use is 9. Optimization is performed using SGD, with a learning rate of 1. D Recall Standard Deviation As is shown in Figure 6, the max recall standard deviation for our model is 0.4, and 0.11 for TransE. E Negative Samplilng with Relation Type Constraints In FB15K, if two relations have same prefix, we regard them as belonging to a same type, e.g., both /film/film/starring./film/performance/actor and 2894 /film/actor/film./film/performance/film have prefix film, they belong to same type. Similar to what is mentioned in §8, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways. E.1 Add Up Two Uniform Distribution For each triple (h, r, t), we have two uniform distribution Uall and Utype. Uall is the uniform distribution over all the relations except for those appear with (h, t) in the knowledge base, and Utype is the uniform distribution over the relations of the same type as r. When corrupting the triple, we sample r′ from the distribution: U = αUall + (1 −α)Utype, (21) where α is a hyperparameter. We set α to 1 at the beginning of training, and every k epochs, α will be multiplied by decrease rate γ. We do grid search for k ∈{50, 70, 100} and γ ∈{0.9, 0.95, 0.98}, but no improvement is observed. E.2 Add Weight We speculate that the unsatisfactory result produced by adding up two uniform distribution is because that for those types with few relations in it, a small change of α will result in a significant change in U. Therefore, when sampling a negative r′, we add weights to relations that are of the same type as r instead. Concretely, we substitute r with r′ with probability p, which can be calculated as: p = ( 1+ϵ N r′ ∈T (r) 1 N otherwise (22) where T (r) denotes all the relations that are the same type as r, ϵ is a hyperparameter and N is a normalizing constant. We set ϵ to 0 at the beginning of training, and every k epochs, ϵ will increase by γ. We do grid search for k ∈{50, 70, 100} and γ ∈0.5, 1, still no improvement is observed. F Wikidata annotation guidance We show the guidance provided for the annotators here. • A pair of relations should be marked as 4 points if the two relations are only two different expressions for a certain meaning. Example: (study at, be educated at) • A pair of relations should be marked as 3 points if the two relations are describing a same topic, and the entities that the two relations connect are of same type respectively. 
Example: (be the director of, be the screenwriter of), both relations relate to movie, and the types of the entities they connect are both (person, movie). • A pair of relations should be marked as 2 points if the two relations are describing a same topic, but the entities that the two relations connect are of different type respectively. Example: (be headquartered in, be founded in), both relations relate to organization, but the types of the entities they connect are different, i.e., (company, location) and (company, time) • A pair of relations should be marked as 1 points if the two relations do not meet the conditions above but still have semantic relation. Example: (be the developer of, be the employer of) • A pair of relations should be marked as 0 points if the two relations do not have any connection. Example: (be a railway station locates in, be published in)
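Relating back to Section 9 and the training details in Appendix C.2, the similarity-augmented softmax-margin loss of equation (15) might look as sketched below. Zeroing the diagonal of the similarity matrix, so that the gold class receives no additional margin, is our assumption rather than something stated in the paper.

```python
import torch

def softmax_margin_loss(logits, gold, sim_matrix, alpha):
    """Softmax-margin loss of equation (15) with cost(r_i, r) = alpha * S(r_i, r).

    logits:     (batch, |R|) scores theta^T f(x, r) from the relation classifier
    gold:       (batch,) gold relation ids r_i
    sim_matrix: (|R|, |R|) precomputed similarities S, assumed zero on the
                diagonal so the gold class gets no extra margin
    """
    cost = alpha * sim_matrix[gold]                   # (batch, |R|)
    log_z = torch.logsumexp(logits + cost, dim=-1)    # margin-augmented partition
    gold_score = logits.gather(-1, gold.unsqueeze(-1)).squeeze(-1)
    return (log_z - gold_score).mean()
```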
2019
278
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2895 Matching the Blanks: Distributional Similarity for Relation Learning Livio Baldini Soares Nicholas FitzGerald Jeffrey Ling∗ Tom Kwiatkowski Google Research {liviobs,nfitz,jeffreyling,tomkwiat}@google.com Abstract General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris’ distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task’s training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED. 1 Introduction Reading text to identify and extract relations between entities has been a long standing goal in natural language processing (Cardie, 1997). Typically efforts in relation extraction fall into one of three groups. In a first group, supervised (Kambhatla, 2004; GuoDong et al., 2005; Zeng et al., 2014), or distantly supervised relation extractors (Mintz et al., 2009) learn a mapping from text to relations in a limited schema. Forming a second group, open information extraction removes the limitations of a predefined schema by instead representing relations using their surface forms (Banko et al., 2007; Fader et al., 2011; Stanovsky et al., 2018), which increases scope but also leads ∗Work done as part of the Google AI residency. to an associated lack of generality since many surface forms can express the same relation. Finally, the universal schema (Riedel et al., 2013) embraces both the diversity of text, and the concise nature of schematic relations, to build a joint representation that has been extended to arbitrary textual input (Toutanova et al., 2015), and arbitrary entity pairs (Verga and McCallum, 2016). However, like distantly supervised relation extractors, universal schema rely on large knowledge graphs (typically Freebase (Bollacker et al., 2008)) that can be aligned to text. Building on Lin and Pantel (2001)’s extension of Harris’ distributional hypothesis (Harris, 1954) to relations, as well as recent advances in learning word representations from observations of their contexts (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2018), we propose a new method of learning relation representations directly from text. First, we study the ability of the Transformer neural network architecture (Vaswani et al., 2017) to encode relations between entity pairs, and we identify a method of representation that outperforms previous work in supervised relation extraction. Then, we present a method of training this relation representation without any supervision from a knowledge graph or human annotators by matching the blanks. 
[BLANK], inspired by Cale’s earlier cover, recorded one of the most acclaimed versions of “[BLANK]” [BLANK]’s rendition of “[BLANK]” has been called “one of the great songs” by Time, and is included on Rolling Stone’s list of “The 500 Greatest Songs of All Time”. Figure 1: “Matching the blanks” example where both relation statements share the same two entities. Following Riedel et al. (2013), we assume access to a corpus of text in which entities have been 2896 linked to unique identifiers and we define a relation statement to be a block of text containing two marked entities. From this, we create training data that contains relation statements in which the entities have been replaced with a special [BLANK] symbol, as illustrated in Figure 1. Our training procedure takes in pairs of blank-containing relation statements, and has an objective that encourages relation representations to be similar if they range over the same pairs of entities. After training, we employ learned relation representations to the recently released FewRel task (Han et al., 2018) in which specific relations, such as ‘original language of work’ are represented with a few exemplars, such as The Crowd (Italian: La Folla) is a 1951 Italian film. Han et al. (2018) presented FewRel as a supervised dataset, intended to evaluate models’ ability to adapt to relations from new domains at test time. We show that through training by matching the blanks, we can outperform Han et al. (2018)’s top performance on FewRel, without having seen any of the FewRel training data. We also show that a model pre-trained by matching the blanks and tuned on FewRel outperforms humans on the FewRel evaluation. Similarly, by training by matching the blanks and then tuning on labeled data, we significantly improve performance on the SemEval 2010 Task 8 (Hendrickx et al., 2009), KBP-37 (Zhang and Wang, 2015), and TACRED (Zhang et al., 2017) relation extraction benchmarks. 2 Overview Task definition In this paper, we focus on learning mappings from relation statements to relation representations. Formally, let x = [x0 . . . xn] be a sequence of tokens, where x0 = [CLS] and xn = [SEP] are special start and end markers. Let s1 = (i, j) and s2 = (k, l) be pairs of integers such that 0 < i < j −1, j < k, k ≤l −1, and l ≤ n. A relation statement is a triple r = (x, s1, s2), where the indices in s1 and s2 delimit entity mentions in x: the sequence [xi . . . xj−1] mentions an entity, and so does the sequence [xk . . . xl−1]. Our goal is to learn a function hr = fθ(r) that maps the relation statement to a fixed-length vector hr ∈Rd that represents the relation expressed in x between the entities marked by s1 and s2. Contributions This paper contains two main contributions. First, in Section 3.1 we investigate different architectures for the relation encoder fθ, all built on top of the widely used Transformer sequence model (Devlin et al., 2018; Vaswani et al., 2017). We evaluate each of these architectures by applying them to a suite of relation extraction benchmarks with supervised training. Our second, more significant, contribution— presented in Section 4—is to show that fθ can be learned from widely available distant supervision in the form of entity linked text. 3 Architectures for Relation Learning The primary goal of this work is to develop models that produce relation representations directly from text. Given the strong performance of recent deep transformers trained on variants of language modeling, we adopt Devlin et al. 
(2018)’s BERT model as the basis for our work. In this section, we explore different methods of representing relations with the Transformer model. 3.1 Relation Classification and Extraction Tasks We evaluate the different methods of representation on a suite of supervised relation extraction benchmarks. The relation extractions tasks we use can be broadly categorized into two types: fully supervised relation extraction, and few-shot relation matching. For the supervised tasks, the goal is to, given a relation statement r, predict a relation type t ∈T where T is a fixed dictionary of relation types and t = 0 typically denotes a lack of relation between the entities in the relation statement. For this type of task we evaluate on SemEval 2010 Task 8 (Hendrickx et al., 2009), KBP-37 (Zhang and Wang, 2015) and TACRED (Zhang et al., 2017). More formally, In the case of few-shot relation matching, a set of candidate relation statements are ranked, and matched, according to a query relation statement. In this task, examples in the test and development sets typically contain relation types not present in the training set. For this type of task, we evaluate on the FewRel (Han et al., 2018) dataset. Specifically, we are given K sets of N labeled relation statements Sk = {(r0, t0) . . . (rN, tN)} where ti ∈{1 . . . K} is the corresponding relation type. The goal is to predict the tq ∈{1 . . . K} for a query relation statement rq. 2897 Relation Statement Per class representation Softmax Deep Transformer Encoder Linear or Norm Layer Similarity score Deep Transformer Encoder Linear or Norm Layer Deep Transformer Encoder Linear or Norm Layer Query Relation Statement Candidate Relation Statement Figure 2: Illustration of losses used in our models. The left figure depicts a model suitable for supervised training, where the model is expected to classify over a predefined dictionary of relation types. The figure on the right depicts a pairwise similarity loss used for few-shot classification task. [CLS] Entity 1 … ... Entity 2 … . [SEP] Deep Transformer (BERT) [CLS] Entity 1 … ... Entity 2 … . [SEP] Deep Transformer (BERT) [CLS] … Entity 1 ... ... Entity 2 … [SEP] Deep Transformer (BERT) 0 1 Token type embeddings 0 0 1 2 2 (a) STANDARD – [CLS] (b) STANDARD – MENTION POOLING (c) POSITIONAL EMB. – MENTION POOL. [CLS] [E1] Entity 1 [/E1] … ... [E2] Entity 2 [/E2] [SEP] Deep Transformer (BERT) [CLS] [E1] Entity 1 [/E1] … ... [E2] Entity 2 [/E2] [SEP] Deep Transformer (BERT) [CLS] [E1] Entity 1 [/E1] … ... [E2] Entity 2 [/E2] [SEP] Deep Transformer (BERT) (d) ENTITY MARKERS – [CLS] (e) ENTITY MARKERS – MENTION POOL. (f) ENTITY MARKERS – ENTITY START Figure 3: Variants of architectures for extracting relation representations from deep Transformers network. Figure (a) depicts a model with STANDARD input and [CLS] output, Figure (b) depicts a model with STANDARD input and MENTION POOLING output and Figure (c) depicts a model with POSITIONAL EMBEDDINGS input and MENTION POOLING output. Figures (d), (e), and (f) use ENTITY MARKERS input while using [CLS], MENTION POOLING, and ENTITY START output, respectively. SemEval 2010 KBP37 TACRED FewRel Task 8 5-way-1-shot # training annotated examples 8,000 (6,500 for dev) 15,916 68,120 44,800 # relation types 19 37 42 100 Dev F1 Test F1 Dev F1 Test F1 Dev F1 Test F1 Dev Acc. Wang et al. (2016)* – 88.0 – – – – – Zhang and Wang (2015)* – 79.6 – 58.8 – – – Bilan and Roth (2018)* – 84.8 – – – 68.2 – Han et al. 
(2018) – – – – – – 71.6 Input type Output type STANDARD [CLS] 71.6 – 41.3 – 23.4 – 85.2 STANDARD MENTION POOL. 78.8 – 48.3 – 66.7 – 87.5 POSITIONAL EMB. MENTION POOL. 79.1 – 32.5 – 63.9 – 87.5 ENTITY MARKERS [CLS] 81.2 – 68.7 – 65.7 – 85.2 ENTITY MARKERS MENTION POOL. 80.4 – 68.2 – 69.5 – 87.6 ENTITY MARKERS ENTITY START 82.1 89.2 70 68.3 70.1 70.1 88.9 Table 1: Results for supervised relation extraction tasks. Results on rows where the model name is marked with a * symbol are reported as published, all other numbers have been computed by us. SemEval 2010 Task 8 does not establish a default split for development; for this work we use a random slice of the training set with 1,500 examples. 2898 3.2 Relation Representations from Deep Transformers Model In all experiments in this section, we start with the BERTLARGE model made available by Devlin et al. (2018) and train towards task-specific losses. Since BERT has not previously been applied to the problem of relation representation, we aim to answer two primary modeling questions: (1) how do we represent entities of interest in the input to BERT, and (2) how do we extract a fixed length representation of a relation from BERT’s output. We present three options for both the input encoding, and the output relation representation. Six combinations of these are illustrated in Figure 3. 3.2.1 Entity span identification Recall, from Section 2, that the relation statement r = (x, s1, s2) contains the sequence of tokens x and the entity span identifiers s1 and s2. We present three different options for getting information about the focus spans s1 and s2 into our BERT encoder. Standard input First we experiment with a BERT model that does not have access to any explicit identification of the entity spans s1 and s2. We refer to this choice as the STANDARD input. This is an important reference point, since we believe that BERT has the ability to identify entities in x, but with the STANDARD input there is no way of knowing which two entities are in focus when x contains more than two entity mentions. Positional embeddings For each of the tokens in its input, BERT also adds a segmentation embedding, primarily used to add sentence segmentation information to the model. To address the STANDARD representation’s lack of explicit entity identification, we introduce two new segmentation embeddings, one that is added to all tokens in the span s1, while the other is added to all tokens in the span s2. This approach is analogous to previous work where positional embeddings have been applied to relation extraction (Zhang et al., 2017; Bilan and Roth, 2018). Entity marker tokens Finally, we augment x with four reserved word pieces to mark the begin and end of each entity mention in the relation statement. We introduce the [E1start], [E1end], [E2start] and [E2end] and modify x to give ˜x =[x0 . . . [E1start] xi . . . xj−1 [E1end] . . . [E2start] xk . . . xl−1 [E2end] . . . xn]. and we feed this token sequence into BERT instead of x. We also update the entity indices ˜s1 = (i + 1, j + 1) and ˜s2 = (k + 3, l + 3) to account for the inserted tokens. We refer to this representation of the input as ENTITY MARKERS. 3.3 Fixed length relation representation We now introduce three separate methods of extracting a fixed length relation representation hr from the BERT encoder. The three variants rely on extracting the last hidden layers of the transformer network, which we define as H = [h0, ...hn] for n = |x| (or |˜x| if entity marker tokens are used). 
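As a brief aside before the output variants, the ENTITY MARKERS input transformation of Section 3.2.1, whose inserted start tokens also feed the ENTITY START output described next, can be sketched as follows. The sketch assumes, as the index update in the paper does, that the first span ends before the second begins.

```python
def add_entity_markers(tokens, s1, s2):
    """Insert [E1start]/[E1end] and [E2start]/[E2end] around the two entity
    spans of a relation statement, returning the new token list and the
    updated span indices of Section 3.2.1.
    """
    (i, j), (k, l) = s1, s2
    new_tokens = (
        tokens[:i] + ["[E1start]"] + tokens[i:j] + ["[E1end]"]
        + tokens[j:k] + ["[E2start]"] + tokens[k:l] + ["[E2end]"] + tokens[l:]
    )
    new_s1 = (i + 1, j + 1)   # one marker inserted before the first span
    new_s2 = (k + 3, l + 3)   # three markers inserted before the second span
    return new_tokens, new_s1, new_s2
```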
[CLS] token Recall from Section 2 that each x starts with a reserved [CLS] token. BERT’s output state that corresponds to this token is used by Devlin et al. (2018) as a fixed length sentence representation. We adopt the [CLS] output, h0, as our first relation representation. Entity mention pooling We obtain hr by maxpooling the final hidden layers corresponding to the word pieces in each entity mention, to get two vectors he1 = MAXPOOL([hi...hj−1]) and he2 = MAXPOOL([hk...hl−1]) representing the two entity mentions. We concatenate these two vectors to get the single representation hr = ⟨he1|he2⟩ where ⟨a|b⟩is the concatenation of a and b. We refer to this architecture as MENTION POOLING. Entity start state Finally, we propose simply representing the relation between two entities with the concatenation of the final hidden states corresponding their respective start tokens, when ENTITY MARKERS are used. Recalling that ENTITY MARKERS inserts tokens in x, creating offsets in s1 and s2, our representation of the relation is rh = ⟨hi|hj+2⟩. We refer to this output representation as ENTITY START output. Note that this can only be applied to the ENTITY MARKERS input. Figure 3 illustrates a few of the variants we evaluated in this section. In addition to defining the model input and output architecture, we fix the training loss used to train the models (which is illustrated in Figure 2). In all models, the output representation from the Transformer network is fed into a fully connected layer that either (1) 2899 contains a linear activation, or (2) performs layer normalization (Ba et al., 2016) on the representation. We treat the choice of post Transfomer layer as a hyper-parameter and use the best performing layer type for each task. For the supervised tasks, we introduce a new classification layer W ∈RKxH where H is the size of the relation representation and K is the number of relation types. The classification loss is the standard cross entropy of the softmax of hrW T with respect to the true relation type. For the few-shot task, we use the dot product between relation representation of the query statement and each of the candidate statements as a similarity score. In this case, we also apply a cross entropy loss of the softmax of similarity scores with respect to the true class. We perform task-specific fine-tuning of the BERT model, for all variants, with the following set of hyper-parameters: • Transformer Architecture: 24 layers, 1024 hidden size, 16 heads • Weight Initialization: BERTLARGE • Post Transformer Layer: Dense with linear activation (KBP-37 and TACRED), or Layer Normalization layer (SemEval 2010 and FewRel). • Training Epochs: 1 to 10 • Learning Rate (supervised): 3e-5 with Adam • Batch Size (supervised): 64 • Learning Rate (few shot): 1e-4 with SGD • Batch Size (few shot): 256 Table 1 shows the results of model variants on the three supervised relation extraction tasks and the 5-way-1-shot variant of the few-shot relation classification task. For all four tasks, the model using the ENTITY MARKERS input representation and ENTITY START output representation achieves the best scores. From the results, it is clear that adding positional information in the input is critical for the model to learn useful relation representations. Unlike previous work that have benefited from positional embeddings (Zhang et al., 2017; Bilan and Roth, 2018), the deep Transformers benefits the most from seeing the new entity boundary word pieces (ENTITY MARKERS). 
It is also worth noting that the best variant outperforms previous published models on all four tasks. For the remainder of the paper, we will use this architecture when further training and evaluating our models. 4 Learning by Matching the Blanks So far, we have used human labeled training data to train our relation statement encoder fθ. Inspired by open information extraction (Banko et al., 2007; Angeli et al., 2015), which derives relations directly from tagged text, we now introduce a new method of training fθ without a predefined ontology, or relation-labeled training data. Instead, we declare that for any pair of relation statements r and r′, the inner product fθ(r)⊤fθ(r′) should be high if the two relation statements, r and r′, express semantically similar relations. And, this inner product should be low if the two relation statements express semantically different relations. Unlike related work in distant supervision for information extraction (Hoffmann et al., 2011; Mintz et al., 2009), we do not use relation labels at training time. Instead, we observe that there is a high degree of redundancy in web text, and each relation between an arbitrary pair of entities is likely to be stated multiple times. Subsequently, r = (x, s1, s2) is more likely to encode the same semantic relation as r′ = (x′, s′ 1, s′ 2) if s1 refers to the same entity as s′ 1, and s2 refers to the same entity as s′ 2. Starting with this observation, we introduce a new method of learning fθ from entity linked text. We introduce this method of learning by matching the blanks (MTB). In Section 5 we show that MTB learns relation representations that can be used without any further tuning for relation extraction—even beating previous work that trained on human labeled data. 4.1 Learning Setup Let E be a predefined set of entities. And let D = [(r0, e0 1, e0 2) . . . (rN, eN 1 , eN 2 )] be a corpus of relation statements that have been labeled with two entities ei 1 ∈E and ei 2 ∈E. Recall, from Section 2, that ri = (xi, si 1, si 2), where si 1 and si 2 delimit entity mentions in xi. Each item in D is created by pairing the relation statement ri with the two entities ei 1 and ei 2 corresponding to the spans si 1 and si 2, respectively. We aim to learn a relation statement encoder fθ that we can use to determine whether or not two relation statements encode the same relation. To do this, we define the following binary classifier p(l = 1|r, r′) = 1 1 + exp fθ(r)⊤fθ(r′) to assign a probability to the case that r and r′ encode the same relation (l = 1), or not (l = 0). We will then learn the parameterization of fθ that 2900 rA In 1976, e1 (then of Bell Labs) published e2, the first of his books on programming inspired by the Unix operating system. rB The “e2” series spread the essence of “C/Unix thinking” with makeovers for Fortran and Pascal. e1’s Ratfor was eventually put in the public domain. rC e1 worked at Bell Labs alongside e3 creators Ken Thompson and Dennis Ritchie. Mentions e1 = Brian Kernighan, e2 = Software Tools, e3 = Unix Table 2: Example of “matching the blanks” automatically generated training data. Statement pairs rA and rB form a positive example since they share resolution of two entities. Statement pairs rA and rC as well as rB and rC form strong negative pairs since they share one entity in common but contain other non-matching entities. 
minimizes the loss L(D) = −1 |D|2 X (r,e1,e2)∈D X (r′,e′ 1,e′ 2)∈D (1) δe1,e′ 1δe2,e′ 2 · log p(l = 1|r, r′)+ (1 −δe1,e′ 1δe2,e′ 2) · log(1 −p(l = 1|r, r′)) where δe,e′ is the Kronecker delta that takes the value 1 iff e = e′, and 0 otherwise. 4.2 Introducing Blanks Readers may have noticed that the loss in Equation 1 can be minimized perfectly by the entity linking system used to create D. And, since this linking system does not have any notion of relations, it is not reasonable to assume that fθ will somehow magically build meaningful relation representations. To avoid simply relearning the entity linking system, we introduce a modified corpus ˜D = [(˜r0, e0 1, e0 2) . . . (˜rN, eN 1 , eN 2 )] where each ˜ri = (˜xi, si 1, si 2) contains a relation statement in which one or both entity mentions may have been replaced by a special [BLANK] symbol. Specifically, ˜x contains the span defined by s1 with probability α. Otherwise, the span has been replaced with a single [BLANK] symbol. The same is true for s2. Only α2 of the relation statements in ˜D explicitly name both of the entities that participate in the relation. As a result, minimizing L( ˜D) requires fθ to do more than simply identifying named entities in r. We hypothesize that training on ˜D will result in a fθ that encodes the semantic relation between the two possibly elided entity spans. Results in Section 5 support this hypothesis. 4.3 Matching the Blanks Training To train a model with matching the blank task, we construct a training setup similar to BERT, where two losses are used concurrently: the masked language model loss and the matching the blanks loss. For generating the training corpus, we use English Wikipedia and extract text passages from the HTML paragraph blocks, ignoring lists, and tables. We use an off-the-shelf entity linking system1 to annotate text spans with a unique knowledge base identifier (e.g., Freebase ID or Wikipedia URL). The span annotations include not only proper names, but other referential entities such as common nouns and pronouns. From this annotated corpus we extract relation statements where each statement contains at least two grounded entities within a fixed sized window of tokens2. To prevent a large bias towards relation statements that involve popular entities, we limit the number of relation statements that contain the same entity by randomly sampling a constant number of relation statements that contain any given entity. We use these statements to train model parameters to minimize L( ˜D) as described in the previous section. In practice, it is not possible to compare every pair of relation statements, as in Equation 1, and so we use a noise-contrastive estimation (Gutmann and Hyv¨arinen, 2012; Mnih and Kavukcuoglu, 2013). In this estimation, we consider all positive pairs of relation statements that contain the same entity, so there is no change to the contribution of the first term in Equation 1—where δe1,e′ 1δe2,e′ 2 = 1. The approximation does, however, change the contribution of the second term. Instead of summing over all pairs of relation statements that do not contain the same pair of entities, we sample a set of negatives that are either randomly sampled uniformly from the set of all relation statement pairs, or are sampled from the 1We use the public Google Cloud Natural Language API to annotate our corpus extracting the “entity analysis” results — https://cloud.google.com/natural-language/ docs/basics#entity analysis . 
2We use a window of 40 tokens, which we observed provides some coverage of long range entity relations, while avoiding a large number of co-occurring but unrelated entities. 2901 5-way 5-way 10-way 10-way 1-shot 5-shot 1-shot 5-shot Proto Net 69.2 84.79 56.44 75.55 BERTEM+MTB 93.9 97.1 89.2 94.3 Human 92.22 – 85.88 – Table 3: Test results for FewRel few-shot relation classification task. Proto Net is the best published system from Han et al. (2018). At the time of writing, our BERTEM+MTB model outperforms the top model on the leaderboard (http://www.zhuhao.me/fewrel/) by over 10% on the 5-way-1-shot and over 15% on the 10way-1-shot configurations. set of relation statements that share just a single entity. We include the second set ‘hard’ negatives to account for the fact that most randomly sampled relation statement pairs are very unlikely to be even remotely topically related, and we would like to ensure that the training procedure sees pairs of relation statements that refer to similar, but different, relations. Finally, we probabilistically replace each entity’s mention with [BLANK] symbols, with a probability of α = 0.7, as described in Section 3.2, to ensure that the model is not confounded by the absence of [BLANK] symbols in the evaluation tasks. In total, we generate 600 million relation statement pairs from English Wikipedia, roughly split between 50% positive and 50% strong negative pairs. 5 Experimental Evaluation In this section, we evaluate the impact of training by matching the blanks. We start with the best BERT based model from Section 3.3, which we call BERTEM, and we compare this to a variant that is trained with the matching the blanks task (BERTEM+MTB). We train the BERTEM+MTB model by initializing the Transformer weights to the weights from BERTLARGE and use the following parameters: • Learning rate: 3e-5 with Adam • Batch size: 2,048 • Number of steps: 1 million • Relation representation: ENTITY MARKER We report results on all of the tasks from Section 3.1, using the same task-specific training methodology for both BERTEM and BERTEM+MTB. 5.1 Few-shot Relation Matching First, we investigate the ability of BERTEM+MTB to solve the FewRel task without any task-specific SemEval 2010 KBP37 TACRED SOTA 84.8 58.8 68.2 BERTEM 89.2 68.3 70.1 BERTEM+MTB 89.5 69.3 71.5 Table 4: F1 scores of BERTEM+MTB and BERTEM based relation classifiers on the respective test sets. Details of the SOTA systems are given in Table 1. training data. Since FewRel is an exemplar-based approach, we can just rank each candidate relation statement according to its representation’s inner product with the exemplars’ representations. Figure 4 shows that the task agnostic BERTEM and BERTEM+MTB models outperform the previous published state of the art on FewRel task even when they have not seen any FewRel training data. For BERTEM+MTB, the increase over Han et al. (2018)’s supervised approach is very significant— 8.8% on the 5-way-1-shot task and 12.7% on the 10-way-1-shot task. BERTEM+MTB also significantly outperforms BERTEM in this unsupervised setting, which is to be expected since there is no relation-specific loss during BERTEM’s training. To investigate the impact of supervision on BERTEM and BERTEM+MTB, we introduce increasing amounts of FewRel’s training data. Figure 4 shows the increase in performance as we either increase the number of training examples for each relation type, or we increase the number of relation types in the training data. 
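For concreteness, the exemplar-based scoring used in this no-tuning setting — ranking candidates by the inner product between the query representation and the exemplars' representations — can be sketched as below. Aggregating several exemplars per relation type with a maximum is our assumption; only the dot-product scoring itself is specified.

```python
import numpy as np

def few_shot_predict(query_rep, exemplar_reps):
    """Exemplar-based few-shot prediction: score each relation type by the
    inner product between the query representation and its exemplars'
    representations, and return the best-scoring type.

    query_rep:     (d,) relation representation f_theta(r_q)
    exemplar_reps: (K, N, d) representations of N exemplars for each of K types
    """
    scores = np.einsum("knd,d->kn", np.asarray(exemplar_reps),
                       np.asarray(query_rep))
    return int(scores.max(axis=1).argmax())   # 1-shot: N = 1, no aggregation
```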
When given access to all of the training data, BERTEM approaches BERTEM+MTB’s performance. However, when we keep all relation types during training, and vary the number of types per example, BERTEM+MTB only needs 6% of the training data to match the performance of a BERTEM model trained on all of the training data. We observe that maintaining a diversity of relation types, and reducing the number of examples per type, is the most effective way to reduce annotation effort for this task. The results in Figure 4 show that MTB training could be used to significantly reduce effort in implementing an exemplar based relation extraction system. Finally, we report BERTEM+MTB’s performance on all of FewRel’s fully supervised tasks in Table 3. We see that it outperforms the human upper bound reported by Han et al. (2018), and it significantly outperforms all other submissions to the FewRel leaderboard, published or unpublished. 2902 examples per relation type (log scale) Accuracy 60 65 70 75 80 85 0 5 10 20 40 80 160 320 700 BERTᴇᴍ BERTᴇᴍ+MTB number of relation types Accuracy 60 65 70 75 80 85 0 20 40 60 BERTᴇᴍ BERTᴇᴍ+MTB 5 way 1 shot # examples per type 0 5 20 80 320 700 Prot.Net. (CNN) – – – – – 71.6 BERTEM 72.9 81.6 85.1 86.9 88.8 88.9 BERTEM+MTB 80.4 85.5 88.4 89.6 89.6 90.1 10 way 1 shot # examples per type 0 5 20 80 320 700 Prot.Net. (CNN) – – – – – 58.8 BERTEM 62.3 72.8 76.9 79.0 81.4 82.8 BERTEM+MTB 71.5 78.1 81.2 82.9 83.7 83.4 5 way 1 shot # training types 0 5 16 32 64 Prot.Net. (CNN) – – – – 71.6 BERTEM 72.9 78.4 81.2 83.4 88.9 BERTEM+MTB 80.4 84.04 85.5 86.8 90.1 10 way 1 shot # training types 0 5 16 32 64 Prot.Net. (CNN) – – – – 58.8 BERTEM 62.3 68.9 71.9 74.3 81.4 BERTEM+MTB 71.5 76.2 76.9 78.5 83.7 Figure 4: Comparison of classifiers tuned on FewRel. Results are for the development set while varying the amount of annotated examples available for fine-tuning. On the left, we display accuracies while varying the number of examples per relation type, while maintaining all 64 relations available for training. On the right, we display accuracy on the development set of the two models while varying the total number of relation types available for tuning, while maintaining all 700 examples per relation type. In both graphs, results for the 10-way-1-shot variant of the task are displayed. % of training set 1% 10% 20% 50% 100% SemEval 2010 Task 8 BERTEM 28.6 66.9 75.5 80.3 82.1 BERTEM+MTB 31.2 70.8 76.2 80.4 82.7 KBP-37 BERTEM 40.1 63.6 65.4 67.8 69.5 BERTEM+MTB 44.2 66.3 67.2 68.8 70.3 TACRED BERTEM 32.8 59.6 65.6 69.0 70.1 BERTEM+MTB 43.4 64.8 67.2 69.9 70.6 Table 5: F1 scores on development sets for supervised relation extraction tasks while varying the amount of tuning data available to our BERTEM and BERTEM+MTB models. 5.2 Supervised Relation Extraction Table 4 contains results for our classifiers tuned on supervised relation extraction data. As was established in Section 3.2, our BERTEM based classifiers outperform previously published results for these three tasks. The additional MTB based training further increases F1 scores for all tasks. We also analyzed the performance of our two models while reducing the amount of supervised task specific tuning data. The results displayed in Table 5 show the development set performance when tuning on a random subset of the task specific training data. For all tasks, we see that MTB based training is even more effective for low-resource cases, where there is a larger gap in performance between our BERTEM and BERTEM+MTB based classifiers. 
This further supports our argument that training by matching the blanks can significantly reduce the amount of human input required to create relation extractors, and populate a knowledge base. 6 Conclusion and Future Work In this paper we study the problem of producing useful relation representations directly from text. We describe a novel training setup, which we call matching the blanks, which relies solely on entity resolution annotations. When coupled with a new architecture for fine-tuning relation representations in BERT, our models achieves state-ofthe-art results on three relation extraction tasks, and outperforms human accuracy on few-shot relation matching. In addition, we show how the new model is particularly effective in low-resource regimes, and we argue that it could significantly reduce the amount of human effort required to create relation extractors. In future work, we plan to work on relation discovery by clustering relation statements that have similar representations according to 2903 BERTEM+MTB. This would take us some of the way toward our goal of truly general purpose relation identification and extraction. We will also study representations of relations and entities that can be used to store relation triples in a distributed knowledge base. This is inspired by recent work in knowledge base embedding (Bordes et al., 2013; Nickel et al., 2016). References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 344–354. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI’07, pages 2670– 2676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Ivan Bilan and Benjamin Roth. 2018. Position-aware self-attention with relative positional encodings for slot filling. CoRR, abs/1807.03052. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD ’08, pages 1247–1250, New York, NY, USA. ACM. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787–2795. Curran Associates, Inc. Claire Cardie. 1997. Empirical methods in information extraction. AI Magazine, 18(4):65–80. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Edinburgh, Scotland, UK. Association for Computational Linguistics. 
Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427–434. Association for Computational Linguistics. 2904 Michael U Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(Feb):307–361. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803– 4809. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Association for Computational Linguistics. Dekang Lin and Patrick Pantel. 2001. DIRT: Discovery of Inference Rules from Text. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’01), pages 323–328, New York, NY, USA. ACM Press. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in neural information processing systems, pages 2265–2273. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 1955–1961. AAAI Press. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 
2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885– 895, New Orleans, Louisiana. Association for Computational Linguistics. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Lisbon, Portugal. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Patrick Verga and Andrew McCallum. 2016. Row-less universal schema. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, pages 63–68, San Diego, CA. Association for Computational Linguistics. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1298–1307. Association for Computational Linguistics. 2905 Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. Dublin City University and Association for Computational Linguistics. Dongxu Zhang and Dong Wang. 2015. Relation classification via recurrent neural network. CoRR, abs/1508.01006. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45.
2019
279
Augmenting Neural Networks with First-order Logic Tao Li University of Utah [email protected] Vivek Srikumar University of Utah [email protected] Abstract Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. Using world knowledge to inform a model, and yet retain the ability to perform end-to-end training remains an open question. In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes. 1 Introduction Neural models demonstrate remarkable predictive performance across a broad spectrum of NLP tasks: e.g., natural language inference (Parikh et al., 2016), machine comprehension (Seo et al., 2017), machine translation (Bahdanau et al., 2015), and summarization (Rush et al., 2015). These successes can be attributed to their ability to learn robust representations from data. However, such end-to-end training demands a large number of training examples; for example, training a typical network for machine translation may require millions of sentence pairs (e.g. Luong et al., 2015). The difficulties and expense of curating large amounts of annotated data are well understood and, consequently, massive datasets may not be available for new tasks, domains or languages. In this paper, we argue that we can combat the data hungriness of neural networks by taking advantage of domain knowledge expressed as Gaius Julius Caesar (July 100 BC – 15 March 44 BC), Roman general, statesman, Consul and notable author of Latin prose, played a critical role in the events that led to the demise of the Roman Republic and the rise of the Roman Empire through his various military campaigns. Paragraph: Question: Which Roman general is known for writing prose? Figure 1: An example of reading comprehension that illustrates alignments/attention. In this paper, we consider the problem of incorporating external knowledge about such alignments into training neural networks. first-order logic. As an example, consider the task of reading comprehension, where the goal is to answer a question based on a paragraph of text (Fig. 1). Attention-driven models such as BiDAF (Seo et al., 2017) learn to align words in the question with words in the text as an intermediate step towards identifying the answer. While alignments (e.g. author to writing) can be learned from data, we argue that models can reduce their data dependence if they were guided by easily stated rules such as: Prefer aligning phrases that are marked as similar according to an external resource, e.g., ConceptNet (Liu and Singh, 2004). If such declaratively stated rules can be incorporated into training neural networks, then they can provide the inductive bias that can reduce data dependence for training. That general neural networks can represent such Boolean functions is known and has been studied both from the theoretical and empirical perspectives (e.g. Maass et al., 1994; Anthony, 2003; Pan and Srikumar, 2016). Recently, Hu et al. 
(2016) exploit this property to train a neural network to mimic a teacher network that uses structured rules. In this paper, we seek to directly incorporate such structured knowledge into a neural network architecture without substantial changes to the training methods. We focus on three questions: 1. Can we integrate declarative rules with endto-end neural network training? 2. Can such rules help ease the need for data? 3. How does incorporating domain expertise compare against large training resources powered by pre-trained representations? The first question poses the key technical challenge we address in this paper. On one hand, we wish to guide training and prediction with neural networks using logic, which is non-differentiable. On the other hand, we seek to retain the advantages of gradient-based learning without having to redesign the training scheme. To this end, we propose a framework that allows us to systematically augment an existing network architecture using constraints about its nodes by deterministically converting rules into differentiable computation graphs. To allow for the possibility of such rules being incorrect, our framework is designed to admit soft constraints from the ground up. Our framework is compatible with off-the-shelf neural networks without extensive redesign or any additional trainable parameters. To address the second and the third questions, we empirically evaluate our framework on three tasks: machine comprehension, natural language inference, and text chunking. In each case, we use a general off-the-shelf model for the task, and study the impact of simple logical constraints on observed neurons (e.g., attention) for different data sizes. We show that our framework can successfully improve an existing neural design, especially when the number of training examples is limited. In summary, our contributions are: 1. We introduce a new framework for incorporating first-order logic rules into neural network design in order to guide both training and prediction. 2. We evaluate our approach on three different NLP tasks: machine comprehension, textual entailment, and text chunking. We show that augmented models lead to large performance gains in the low training data regimes.1 1The code used for our experiments is archived here: https://github.com/utahnlp/layer_augmentation 2 Problem Setup In this section, we will introduce the notation and assumptions that form the basis of our formalism for constraining neural networks. Neural networks are directed acyclic computation graphs G = (V, E), consisting of nodes (i.e. neurons) V and weighted directed edges E that represent information flow. Although not all neurons have explicitly grounded meanings, some nodes indeed can be endowed with semantics tied to the task. Node semantics may be assigned during model design (e.g. attention), or incidentally discovered in post hoc analysis (e.g., Le et al., 2012; Radford et al., 2017, and others). In either case, our goal is to augment a neural network with such named neurons using declarative rules. The use of logic to represent domain knowledge has a rich history in AI (e.g. Russell and Norvig, 2016). In this work, to capture such knowledge, we will primarily focus on conditional statements of the form L →R, where the expression L is the antecedent (or the left-hand side) that can be conjunctions or disjunctions of literals, and R is the consequent (or the right-hand side) that consists of a single literal. 
Note that such rules include Horn clauses and their generalizations, which are well studied in the knowledge representation and logic programming communities (e.g. Chandra and Harel, 1985). Integrating rules with neural networks presents three difficulties. First, we need a mapping between the predicates in the rules and nodes in the computation graph. Second, logic is not differentiable; we need an encoding of logic that admits training using gradient based methods. Finally, computation graphs are acyclic, but user-defined rules may introduce cyclic dependencies between the nodes. Let us look at these issues in order. As mentioned before, we will assume named neurons are given. And by associating predicates with such nodes that are endowed with symbolic meaning, we can introduce domain knowledge about a problem in terms of these predicates. In the rest of the paper, we will use lower cased letters (e.g., ai, bj) to denote nodes in a computation graph, and upper cased letters (e.g., Ai, Bj) for predicates associated with them. To deal with the non-differentiablity of logic, we will treat the post-activation value of a named neuron as the degree to which the associated predicate is true. In §3, we will look at methods a1 a2 a3 b1 b2 Many layers Figure 2: An example computation graph. The statement A1 ∧B1 →A2 ∧B2 is cyclic with respect to the graph. On the other hand, the statement A1 ∧A2 → B1 ∧B2 is acyclic. for compiling conditional statements into differentiable statements that augment a given network. Cyclicity of Constraints Since we will augment computation graphs with compiled conditional forms, we should be careful to avoid creating cycles. To formalize this, let us define cyclicity of conditional statements with respect to a neural network. Given two nodes a and b in a computation graph, we say that the node a is upstream of node b if there is a directed path from a to b in the graph. Definition 1 (Cyclic and Acyclic Implications). Let G be a computation graph. An implicative statement L →R is cyclic with respect to G if, for any literal Ri ∈R, the node ri associated with it is upstream of the node lj associated with some literal Lj ∈L. An implicative statement is acyclic if it is not cyclic. Fig. 2 and its caption gives examples of cyclic and acyclic implications. A cyclic statement sometimes can be converted to an equivalent acyclic statement by constructing its contrapositive. For example, the constraint B1 →A1 is equivalent to ¬A1 →¬B1. While the former is cyclic, the later is acyclic. Generally, we can assume that we have acyclic implications.2 3 A Framework for Augmenting Neural Networks with Constraints To create constraint-aware neural networks, we will extend the computation graph of an existing network with additional edges defined by constraints. In §3.1, we will focus on the case where the antecedent is conjunctive/disjunctive and the consequent is a single literal. In §3.2, we will cover more general antecedents. 2As we will see in §3.3, the contrapositive does not always help because we may end up with a complex right hand side that we can not yet compile into the computation graph. 3.1 Constraints Beget Distance Functions Given a computation graph, suppose we have a acyclic conditional statement: Z →Y , where Z is a conjunction or a disjunction of literals and Y is a single literal. We define the neuron associated with Y to be y = g (Wx), where g denotes an activation function, W are network parameters, x is the immediate input to y. 
Further, let the vector z represent the neurons associated with the predicates in Z. While the nodes z need to be named neurons, the immediate input x need not necessarily have symbolic meaning. Constrained Neural Layers Our goal is to augment the computation of y so that whenever Z is true, the pre-activated value of y increases if the literal Y is not negated (and decreases if it is). To do so, we define a constrained neural layer as y = g (Wx + ρd (z)) . (1) Here, we will refer to the function d as the distance function that captures, in a differentiable way, whether the antecedent of the implication holds. The importance of the entire constraint is decided by a real-valued hyper-parameter ρ ≥0. The definition of the constrained neural layer says that, by compiling an implicative statement into a distance function, we can regulate the preactivation scores of the downstream neurons based on the states of upstream ones. Designing the distance function The key consideration in the compilation step is the choice of an appropriate distance function for logical statements. The ideal distance function we seek is the indicator for the statement Z: dideal(z) = ( 1, if Z holds, 0, otherwise. However, since the function dideal is not differentiable, we need smooth surrogates. In the rest of this paper, we will define and use distance functions that are inspired by probabilistic soft logic (c.f. Klement et al., 2013) and its use of the Łukasiewicz T-norm and T-conorm to define a soft version of conjunctions and disjunctions.3 Table 1 summarizes distance functions corresponding to conjunctions and disjunctions. In all 3The definitions of the distance functions here as surrogates for the non-differentiable dideal is reminiscent of the use of hinge loss as a surrogate for the zero-one loss. In both cases, other surrogates are possible. Antecedent Distance d(z) V i Zi max(0, P i zi −|Z| + 1) W i Zi min(1, P i zi) ¬ W i Zi max(0, 1 −P i zi) ¬ V i Zi min(1, N −P i zi) Table 1: Distance functions designed using the Łukasiewicz T-norm. Here, |Z| is the number of antecedent literals. zi’s are upstream neurons associated with literals Zi’s. cases, recall that the zi’s are the states of neurons and are assumed to be in the range [0, 1]. Examining the table, we see that with a conjunctive antecedent (first row), the distance becomes zero if even one of the conjuncts is false. For a disjunctive antecedent (second row), the distance becomes zero only when all the disjuncts are false; otherwise, it increases as the disjuncts become more likely to be true. Negating Predicates Both the antecedent (the Z’s) and the consequent (Y ) could contain negated predicates. We will consider these separately. For any negated antecedent predicate, we modify the distance function by substituting the corresponding zi with 1 −zi in Table 1. The last two rows of the table list out two special cases, where the entire antecedents are negated, and can be derived from the first two rows. To negate consequent Y , we need to reduce the pre-activation score of neuron y. To achieve this, we can simply negate the entire distance function. Scaling factor ρ In Eq. 1, the distance function serves to promote or inhibit the value of downstream neuron. The extent is controlled by the scaling factor ρ. For instance, with ρ = +∞, the pre-activation score of the downstream neuron is dominated by the distance function. In this case, we have a hard constraint. 
In contrast, with a small ρ, the output state depends on both the Wx and the distance function. In this case, the soft constraint serves more as a suggestion. Ultimately, the network parameters might overrule the constraint. We will see an example in §4 where noisy constraint prefers small ρ. 3.2 General Boolean Antecedents So far, we exclusively focused on conditional statements with either conjunctive or disjunctive antecedents. In this section, we will consider general antecedents. As an illustrative example, suppose we have an antecedent (¬A ∨B) ∧(C ∨D). By introducing auxiliary variables, we can convert it into the conjunctive form P ∧Q, where (¬A ∨B) ↔P and (C ∨D) ↔Q. To perform such operation, we need to: (1) introduce auxiliary neurons associated with the auxiliary predicates P and Q, and, (2) define these neurons to be exclusively determined by the biconditional constraint. To be consistent in terminology, when considering biconditional statement (¬A ∨B) ↔P, we will call the auxiliary literal P the consequent, and the original literals A and B the antecedents. Because the implication is bidirectional in biconditional statement, it violates our acyclicity requirement in §3.1. However, since the auxiliary neuron state does not depend on any other nodes, we can still create an acyclic sub-graph by defining the new node to be the distance function itself. Constrained Auxiliary Layers With a biconditional statement Z ↔Y , where Y is an auxiliary literal, we define a constrained auxiliary layer as y = d (z) (2) where d is the distance function for the statement, z are upstream neurons associated with Z, y is the downstream neuron associated with Y . Note that, compared to Eq. 1, we do not need activation function since the distance, which is in [0, 1], can be interpreted as producing normalized scores. Note that this construction only applies to auxiliary predicates in biconditional statements. The advantage of this layer definition is that we can use the same distance functions as before (i.e., Table 1). Furthermore, the same design considerations in §3.1 still apply here, including how to negate the left and right hand sides. Constructing augmented networks To complete the modeling framework, we summarize the workflow needed to construct an augmented neural network given a conditional statement and a computation graph: (1) Convert the antecedent into a conjunctive or a disjunctive normal form if necessary. (2) Convert the conjunctive/disjunctive antecedent into distance functions using Table 1 (with appropriate corrections for negations). (3) Use the distance functions to construct constrained layers and/or auxiliary layers to augment the computation graph by replacing the original layer with constrained one. (4) Finally, use the augmented network for end-to-end training and inference. We will see complete examples in §4. 3.3 Discussion Not only does our design not add any more trainable parameters to the existing network, it also admits efficient implementation with modern neural network libraries. When posing multiple constraints on the same downstream neuron, there could be combinatorial conflicts. In this case, our framework relies on the base network to handle the consistency issue. In practice, we found that summing the constrained pre-activation scores for a neuron is a good heuristic (as we will see in §4.3). For a conjunctive consequent, we can decompose it into multiple individual constraints. That is equivalent to constraining downstream nodes independently. 
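Putting these pieces together, one possible rendering of the distance functions in Table 1, the constrained layer in Eq. 1, and the auxiliary layer in Eq. 2 is sketched below. It is an illustration rather than the authors' implementation: the function names are invented here, a sigmoid stands in for the activation g, and the upstream neuron states z are assumed to lie in [0, 1] as the text requires.

```python
# Hypothetical sketch of the constrained and auxiliary layers in Section 3.
import torch

def d_conjunction(z):        # Table 1:  AND_i Z_i  ->  max(0, sum z_i - |Z| + 1)
    return torch.clamp(z.sum(-1) - z.size(-1) + 1.0, min=0.0)

def d_disjunction(z):        # Table 1:  OR_i Z_i   ->  min(1, sum z_i)
    return torch.clamp(z.sum(-1), max=1.0)

def d_neg_disjunction(z):    # Table 1:  not OR_i Z_i  ->  max(0, 1 - sum z_i)
    return torch.clamp(1.0 - z.sum(-1), min=0.0)

def d_neg_conjunction(z):    # Table 1:  not AND_i Z_i ->  min(1, |Z| - sum z_i)
    return torch.clamp(z.size(-1) - z.sum(-1), max=1.0)

def constrained_layer(w, x, z, rho, distance=d_conjunction, negate_rhs=False):
    """Eq. 1 for a single downstream neuron: y = g(w.x + rho * d(z));
    a sigmoid stands in for the activation g."""
    d = distance(z)
    if negate_rhs:           # a negated consequent lowers the pre-activation
        d = -d
    return torch.sigmoid(x @ w + rho * d)

def auxiliary_layer(z, distance=d_disjunction):
    """Eq. 2: the auxiliary neuron is the distance itself, already in [0, 1]."""
    return distance(z)
```

When several rules target the same downstream neuron, their rho-weighted distances can simply be summed before the activation, matching the heuristic mentioned in the discussion above.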
Handling more complex consequents is a direction of future research. 4 Experiments In this section, we will answer the research questions raised in §1 by focusing on the effectiveness of our augmentation framework. Specifically, we will explore three types of constraints by augmenting: 1) intermediate decisions (i.e. attentions); 2) output decisions constrained by intermediate states; 3) output decisions constrained using label dependencies. To this end, we instantiate our framework on three tasks: machine comprehension, natural language inference, and text chunking. Across all experiments, our goal is to study the modeling flexibility of our framework and its ability to improve performance, especially with decreasing amounts of training data. To study low data regimes, our augmented networks are trained using varying amounts of training data to see how performances vary from baselines. For detailed model setup, please refer to the appendices. 4.1 Machine Comprehension Attention is a widely used intermediate state in several recent neural models. To explore the augmentation over such neurons, we focus on attention-based machine comprehension models on SQuAD (v1.1) dataset (Rajpurkar et al., 2016). We seek to use word relatedness from external resources (i.e., ConceptNet) to guide alignments, and thus to improve model performance. Model We base our framework on two models: BiDAF (Seo et al., 2017) and its ELMoaugmented variant (Peters et al., 2018). Here, we provide an abstraction of the two models which our framework will operate on: p, q = encoder(p), encoder(q) (3) ←−a , −→a = σ(layers(p, q)) (4) y, z = σ(layers(p, q, ←−a , −→a )) (5) where p and q are the paragraph and query respectively, σ refers to the softmax activation, ←−a and −→a are the bidirectional attentions from q to p and vice versa, y and z are the probabilities of answer boundaries. All other aspects are abstracted as encoder and layers. Augmentation By construction of the attention neurons, we expect that related words should be aligned. In a knowledge-driven approach, we can use ConceptNet to guide the attention values in the model in Eq. 4. We consider two rules to illustrate the flexibility of our framework. Both statements are in firstorder logic that are dynamically grounded to the computation graph for a particular paragraph and query. First, we define the following predicates: Ki,j word pi is related to word qj in ConceptNet via edges {Synonym, DistinctFrom, IsA, Related}. ←− A i,j unconstrained model decision that word qj best matches to word pi. ←− A ′ i,j constrained model decision for the above alignment. Using these predicates, we will study the impact of the following two rules, defined over a set C of content words in p and q: R1: ∀i, j ∈C, Ki,j →←− A ′ i,j. R2: ∀i, j ∈C, Ki,j ∧←− A i,j →←− A ′ i,j. The rule R1 says that two words should be aligned if they are related. Interestingly, compiling this statement using the distance functions in Table 1 is essentially the same as adding word relatedness as a static feature. The rule R2 is more conservative as it also depends on the unconstrained %Train BiDAF +R1 +R2 +ELMo +ELMo,R1 10% 57.5 61.5 60.7 71.8 73.0 20% 65.7 67.2 66.6 76.9 77.7 40% 70.6 72.6 71.9 80.3 80.9 100% 75.7 77.4 77.0 83.9 84.1 Table 2: Impact of constraints on BiDAF. Each score represents the average span F1 on our test set (i.e. official dev set) among 3 random runs. Constrained models and ELMo models are built on top of BiDAF. We set ρ = 2 for both R1 and R2 across all percentages. 
model decision. In both cases, since Ki,j does not map to a node in the network, we have to create a new node ki,j whose value is determined using ConceptNet, as illustrated in Fig. 3. Many Many layers layers a1,1 am,n .... .... p1 pm q1 qn .... y1 ym .... z1 zm s1,1 sm,n a1,1 am,n s1,1 sm,n s’1,1 s’m,n (a) (b) .... .... .... softmax a’1,1 a’m,n .... softmax softmax distance k1,1 km,n .... Figure 3: (a) The computation graph of BiDAF where attention directions are obmitted. (b) The augmented graph on attention layer using R2. Bold circles are extra neurons introduced. Constrained attentions and scores are a′ and s′ respectively. In the augmented model, graph (b) replaces the shaded part in (a). Can our framework use rules over named neurons to improve model performance? The answer is yes. We experiment with rules R1 and R2 on incrementally larger training data. Performances are reported in Table 2 with comparison with baselines. We see that our framework can indeed use logic to inform model learning and prediction without any extra trainable parameters needed. The improvement is particularly strong with small training sets. With more data, neural models are less reliant on external information. As a result, the improvement with larger datasets is smaller. How does it compare to pretrained encoders? Pretrained encoders (e.g. ELMo and BERT (Devlin et al., 2018)) improve neural models with improved representations, while our framework augments the graph using first-order logic. It is important to study the interplay of these two orthogonal directions. We can see in Table 2, our augmented model consistently outperforms baseline even with the presence of ELMo embeddings. Does the conservative constraint R2 help? We explored two options to incorporate word relatedness; one is a straightforward constraint (i.e. R1), another is its conservative variant (i.e. R2). It is a design choice as to which to use. Clearly in Table 2, constraint R1 consistently outperforms its conservative alternative R2, even though R2 is better than baseline. In the next task, we will see an example where a conservative constraint performs better with large training data. 4.2 Natural Language Inference Unlike in the machine comprehension task, here we explore logic rules that bridge attention neurons and output neurons. We use the SNLI dataset (Bowman et al., 2015), and base our framework on a variant of the decomposable attention (DAtt, Parikh et al., 2016) model where we replace its projection encoder with bidirectional LSTM (namely L-DAtt). Model Again, we abstract the pipeline of LDAtt model, only focusing on layers which our framework works on. Given a premise p and a hypothesis h, we summarize the model as: p, h = encoder(p), encoder(h) (6) ←−a , −→a = σ(layers(p, h)) (7) y = σ(layers(p, h, ←−a , −→a )) (8) Here, σ is the softmax activation, ←−a and −→a are bidirectional attentions, y are probabilities for labels Entailment, Contradiction, and Neutral. Augmentation We will borrow the predicate notation defined in the machine comprehension task (§4.1), and ground them on premise and hypothesis words, e.g. Ki,j now denotes the relatedness between premise word pi and hypothesis word hj. In addition, we define the predicate Yl to indicate that the label is l. As in §4.1, we define two rules governing attention: N1: ∀i, j ∈C, Ki,j →A′ i,j. N2: ∀i, j ∈C, Ki,j ∧Ai,j →A′ i,j. where C is the set of content words. Note that the two constraints apply to both attention directions. 
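Concretely, compiling a rule of the R1/N1 form with the distance functions of Table 1 amounts to adding the relatedness indicator to the pre-softmax attention scores, and the conservative R2/N2 form additionally conjoins it with the unconstrained decision. The sketch below is an illustration under assumed shapes and an assumed softmax direction, not the released code; k denotes the ConceptNet-derived relatedness matrix and s the unconstrained attention scores.

```python
# Hypothetical sketch of augmenting attention with K_{i,j} -> A'_{i,j}
# (R1/N1) and its conservative variant K_{i,j} AND A_{i,j} -> A'_{i,j} (R2/N2).
import torch

def augment_attention(s, k, rho=2.0, conservative=False):
    """s: (m, n) unconstrained pre-softmax attention scores;
    k: (m, n) 0/1 ConceptNet relatedness K_{i,j};
    returns constrained attention probabilities a'. rho = 2 follows Table 2."""
    if conservative:                          # R2 / N2
        a = torch.softmax(s, dim=-1)          # unconstrained decision A_{i,j}
        d = torch.clamp(k + a - 1.0, min=0.0) # conjunction of two literals
    else:                                     # R1 / N1
        d = k                                 # single-literal antecedent: d(z) = z
    return torch.softmax(s + rho * d, dim=-1)
```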
Intuitively, if a hypothesis content word is not aligned, then the prediction should not be Entailment. To use this knowledge, we define the following rule: N3: Z1 ∧Z2 →¬Y ′ Entail, where ∃j ∈C, ¬  ∃i ∈C, ←− A ′ i,j  ↔Z1, ∃j ∈C, ¬  ∃i ∈C, −→ A ′ i,j  ↔Z2. where Z1 and Z2 are auxiliary predicates tied to the Y ′ Entail predicate. The details of N3 are illustrated in Fig. 4. a1,1 ai,j am,n .... .... .... p1 pm h1 hn (a) (b) se sc sn ye yc yn a’1,1 a’i,j a’m,n .... .... .... softmax Many s’e sc sn z1 y’e yc yn z2 se softmax Many layers distance distance Many layers layers Figure 4: (a) The computation graph of the L-DAtt model (attention directions obmitted). (b) The augmented graph on the Entail label using N3. Bold circles are extra neurons introduced. Unconstrained preactivation scores are s while s′ e is the constrained score on Entail. Intermediate neurons are z1 and z2. constrained attentions a′ are constructed using N1 or N2. In our augmented model, the graph (b) replaces the shaded part in (a). How does our framework perform with large training data? The SNLI dataset is a large dataset with over half-million examples. We train our models using incrementally larger percentages of data and report the average performance in Table 3. Similar to §4.1, we observe strong improvements from augmented models trained on small percentages (≤10%) of data. The straightforward constraint N1 performs strongly with ≤2% data while its conservative alternative N2 works better with a larger set. However, with full dataset, our augmented models perform only on par with baseline even with lowered scaling factor ρ. These observations suggest that if a large dataset is available, it may be better to believe the data, but with smaller datasets, constraints can provide useful inductive bias for the models. Are noisy constraints helpful? It is not always easy to state a constraint that all examples satisfy. Comparing N2 and N3, we see that N3 per%Train L-DAtt +N1 +N2 +N3 +N2,3 1% 61.2 64.9 63.9 62.5 64.3 2% 66.5 70.5 69.8 67.9 70.2 5% 73.4 76.2 76.6 74.0 76.4 10% 78.9 80.1 80.4 79.3 80.3 100% 87.1 86.9 87.1 87.0 86.9 Table 3: Impact of constraints on L-DAtt network. Each score represents the average accuracy on SNLI test set among 3 random runs. For both N1 and N2, we set ρ = (8, 8, 8, 8, 4) for the five different percentages. For the noisy constraint N3, ρ = (2, 2, 1, 1, 1). formed even worse than baseline, which suggests it contains noise. In fact, we found a significant amount of counter examples to N3 during preliminary analysis. Yet, even a noisy rule can improve model performance with ≤10% data. The same observation holds for N1, which suggests conservative constraints could be a way to deal with noise. Finally, by comparing N2 and N2,3, we find that the good constraint N2 can not just augment the network, but also amplify the noise in N3 when they are combined. This results in degrading performance in the N2,3 column starting from 5% of the data, much earlier than using N3 alone. 4.3 Text Chunking Attention layers are a modeling choice that do not always exist in all networks. To illustrate that our framework is not necessarily grounded to attention, we turn to an application where we use knowledge about the output space to constrain predictions. We focus on the sequence labeling task of text chunking using the CoNLL2000 dataset (Tjong Kim Sang and Buchholz, 2000). In such sequence tagging tasks, global inference is widely used, e.g., BiLSTM-CRF (Huang et al., 2015). 
Our framework, on the other hand, aims to promote local decisions. To explore the interplay of global model and local decision augmentation, we will combine CRF with our framework. Model Our baseline is a BiLSTM tagger: x = BiLSTM(x) (9) y = σ(linear(x)) (10) where x is the input sentence, σ is softmax, y are the output probabilities of BIO tags. Augmentation We define the following predicates for input and output neurons: %Train BiLSTM +CRF +C1:5 +CRF,C1:5 5% 87.2 86.6 88.9 88.6 10% 89.1 88.8 90.7 90.6 20% 90.9 90.8 92.1 92.1 40% 92.5 92.5 93.4 93.5 100% 94.1 94.4 94.8 95.0 Table 4: Impact of constraints on BiLSTM tagger. Each score represents the average accuracy on test set of 3 random runs. The columns of +CRF, +C1:5, and +CRF,C1:5 are on top of the BiLSTM baseline. For C1:4, ρ = 4 for all percentages. For C5, ρ = 16. Yt,l The unconstrained decision that tth word has label l. Y ′ t,l The constrained decision that tth word has label l. Nt The tth word is a noun. Then we can write rules for pairwise label dependency. For instance, if word t has B/I- tag for a certain label, word t+1 can not have an I- tag with a different label. C1: ∀t, Yt,B-VP →¬Y ′ t+1,I-NP C2: ∀t, Yt,I-NP →¬Y ′ t+1,I-VP C3: ∀t, Yt,I-VP →¬Y ′ t+1,I-NP C4: ∀t, Yt,B-PP →¬Y ′ t+1,I-VP Our second set of rules are also intuitive: A noun should not have non-NP label. C5: ∀t, Nt →V l∈{B-VP,I-VP,B-PP,I-PP} ¬Y ′ t,l While all above rules can be applied as hard constraints in the output space, our framework provides a differentiable way to inform the model during training and prediction. How does local augmentation compare with global inference? We report performances in Table 4. While a first-order Markov model (e.g., the BiLSTM-CRF) can learn pairwise constraints such as C1:4, we see that our framework can better inform the model. Interestingly, the CRF model performed even worse than the baseline with ≤40% data. This suggests that global inference relies on more training examples to learn its scoring function. In contrast, our constrained models performed strongly even with small training sets. And by combining these two orthogonal methods, our locally augmented CRF performed the best with full data. 5 Related Work and Discussion Artificial Neural Networks and Logic Our work is related to neural-symbolic learning (e.g. Besold et al., 2017) which seeks to integrate neural networks with symbolic knowledge. For example, Cingillioglu and Russo (2019) proposed neural models that multi-hop logical reasoning. KBANN (Towell et al., 1990) constructs artificial neural networks using connections expressed in propositional logic. Along these lines, França et al. (2014, CILP++) build neural networks from a rule set for relation extraction. Our distinction is that we use first-order logic to augment a given architecture instead of designing a new one. Also, our framework is related to Kimmig et al. (2012, PSL) which uses a smooth extension of standard Boolean logic. Hu et al. (2016) introduced an imitation learning framework where a specialized teacher-student network is used to distill rules into network parameters. This work could be seen as an instance of knowledge distillation (Hinton et al., 2015). Instead of such extensive changes to the learning procedure, our framework retains the original network design and augments existing interpretable layers. 
Regularization with Logic Several recent lines of research seek to guide training neural networks by integrating logical rules in the form of additional terms in the loss functions (e.g., Rocktäschel et al., 2015) that essentially promote constraints among output labels (e.g., Du et al., 2019; Mehta et al., 2018), promote agreement (Hsu et al., 2018) or reduce inconsistencies across predictions (Minervini and Riedel, 2018). Furthermore, Xu et al. (2018) proposed a general design of loss functions using symbolic knowledge about the outputs. Fischer et al. (2019) described a method for for deriving losses that are friendly to gradient-based learning algorithms. Wang and Poon (2018) proposed a framework for integrating indirect supervision expressed via probabilistic logic into neural networks. Learning with Structures Traditional structured prediction models (e.g. Smith, 2011) naturally admit constraints of the kind described in this paper. Indeed, our approach for using logic as a template-language is similar to Markov Logic Networks (Richardson and Domingos, 2006), where logical forms are compiled into Markov networks. Our formulation augments model scores with constraint penalties is reminiscent of the Constrained Conditional Model of Chang et al. (2012). Recently, we have seen some work that allows backpropagating through structures (e.g. Huang et al., 2015; Kim et al., 2017; Yogatama et al., 2017; Niculae et al., 2018; Peng et al., 2018, and the references within). Our framework differs from them in that structured inference is not mandantory here. We believe that there is room to study the interplay of these two approaches. Also related to our attention augmentation is using word relatedness as extra input feature to attention neurons (e.g. Chen et al., 2018). 6 Conclusions In this paper, we presented a framework for introducing constraints in the form of logical statements to neural networks. We demonstrated the process of converting first-order logic into differentiable components of networks without extra learnable parameters and extensive redesign. Our experiments were designed to explore the flexibility of our framework with different constraints in diverse tasks. As our experiments showed, our framework allows neural models to benefit from external knowledge during learning and prediction, especially when training data is limited. 7 Acknowledgements We thank members of the NLP group at the University of Utah for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments. We also acknowledge the support of NSF SaTC1801446, and gifts from Google and NVIDIA. References Martin Anthony. 2003. Boolean functions and artificial neural networks. Boolean Functions. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Tarek R Besold, Artur d’Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural-symbolic learning and reasoning: A survey and interpretation. arXiv preprint arXiv:1711.03902. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Ashok K Chandra and David Harel. 1985. 
Horn clause queries and generalizations. The Journal of Logic Programming, 2. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured learning with constrained conditional models. Machine learning, 88. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Nuri Cingillioglu and Alessandra Russo. 2019. Deeplogic: End-to-end logical reasoning. AAAI 2019 Spring Symposium on Combining Machine Learning with Knowledge Engineering. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Xinya Du, Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen tau Yih, Peter Clark, and Claire Cardie. 2019. Be consistent! improving procedural text comprehension using label consistency. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Marc Fischer, Mislav Balunovic, Dana DrachslerCohen, Timon Gehr, Ce Zhang, and Martin Vechev. 2019. Dl2: Training and querying neural networks with logic. In International Conference on Machine Learning. Manoel VM França, Gerson Zaverucha, and Artur S d’Avila Garcez. 2014. Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine learning, 94. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In Neural Information Processing Systems. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. In International Conference on Learning Representations. Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications. Erich Peter Klement, Radko Mesiar, and Endre Pap. 2013. Triangular norms. Springer Science & Business Media. Quoc V Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S Corrado, Jeff Dean, and Andrew Y Ng. 2012. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning. Hugo Liu and Push Singh. 2004. ConceptNet – A Practical Commonsense Reasoning Tool-Kit. BT technology journal, 22. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 
Wolfgang Maass, Georg Schnitger, and Eduardo D Sontag. 1994. A comparison of the computational power of sigmoid and boolean threshold circuits. In Theoretical Advances in Neural Computation and Learning, pages 127–151. Springer. Sanket Vaibhav Mehta, Jay Yoon Lee, and Jaime Carbonell. 2018. Towards semi-supervised learning for deep semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Pasquale Minervini and Sebastian Riedel. 2018. Adversarially regularising neural nli models to integrate logical background knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning. Vlad Niculae, André FT Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. In International Conference on Machine Learning. Xingyuan Pan and Vivek Srikumar. 2016. Expressiveness of rectifier networks. In International Conference on Machine Learning. Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS 2017 Autodiff Workshop. Hao Peng, Sam Thomson, and Noah A Smith. 2018. Backpropagating through Structured Argmax using a SPIGOT. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, 62. Tim Rocktäschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stuart J Russell and Peter Norvig. 2016. Artificial Intelligence: A Modern Approach. Pearson Education Limited. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. International Conference on Learning Representations. Noah A Smith. 2011. Linguistic structure prediction. Synthesis lectures on human language technologies, 4. 
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15. Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning. Geoffrey G Towell, Jude W Shavlik, and Michiel O Noordewier. 1990. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Hai Wang and Hoifung Poon. 2018. Deep probabilistic logic: A unifying framework for indirect supervision. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. 2018. A semantic loss function for deep learning with symbolic knowledge. In International Conference on Machine Learning. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In International Conference on Machine Learning. A Appendices Here, we explain our experiment setup for the three tasks: machine comprehension, natural language inference, and text chunking. For each task, we describe the model setup, hyperparameters, and data splits. For all three tasks, we used Adam (Paszke et al., 2017) for training and use 300 dimensional GloVe (Pennington et al., 2014) vectors (trained on 840B tokens) as word embeddings. A.1 Machine Comprehension The SQuAD (v1.1) dataset consists of 87, 599 training instances and 10, 570 development examples. Firstly, for a specific percentage of training data, we sample from the original training set. Then we split the sampled set into 9/1 folds for training and development. The original development set is reserved for testing only. This is because that the official test set is hidden, and the number of models we need to evaluate is impractical for accessing official test set. In our implementation of the BiDAF model, we use a learning rate 0.001 to train the model for 20 epochs. Dropout (Srivastava et al., 2014) rate is 0.2. The hidden size of each direction of BiLSTM encoder is 100. For ELMo models, we train for 25 epochs with learning rate 0.0002. The rest hyperparameters are the same as in (Peters et al., 2018). Note that we did neither pre-tune nor posttune ELMo embeddings. The best model on the development split is selected for evaluation. No exponential moving average method is used. The scaling factor ρ’s are manually grid-searched in {1, 2, 4, 8, 16} without extensively tuning. A.2 Natural Language Inference We use Stanford Natural Language Inference (SNLI) dataset which has 549, 367 training, 9, 842 development, and 9, 824 test examples. For each of the percentages of training data, we sample the same proportion from the orginal development set for validation. To have reliable model selection, we limit the minimal number of sampled development examples to be 1000. The original test set is only for reporting. In our implimentation of the BiLSTM variant of the Decomposable Attention (DAtt) model, we adopt learning rate 0.0001 for 100 epochs of training. The dropout rate is 0.2. The best model on the development split is selected for evaluation. 
The scaling factor ρ’s are manually grid-searched in {0.5, 1, 2, 4, 8, 16} without extensively tuning. A.3 Text Chunking The CoNLL2000 dataset consists of 8, 936 examples for training and 2, 012 for testing. From the original training set, both of our training and development examples are sampled and split (by 9/1 folds). Performances are then reported on the original full test set. In our implementation, we set hidden size to 100 for each direction of BiLSTM encoder. Before the final linear layer, we add a dropout layer with probability 0.5 for regularization. Each model was trained for 100 epochs with learning rate 0.0001. The best model on the development split is selected for evaluation. The scaling factor ρ’s are manually grid-searched in {1, 2, 4, 8, 16, 32, 64} without extensively tuning.
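For concreteness, the model-selection loop implied by these manual grid searches can be sketched as below; the `train_and_evaluate` callable is a hypothetical placeholder (train one model with the given ρ and return its development-split score) and is not part of any released code.

```python
def grid_search_rho(candidates, train_and_evaluate):
    """Return the rho value with the best development-split score."""
    best_rho, best_score = None, float("-inf")
    for rho in candidates:
        score = train_and_evaluate(rho)  # e.g. dev-set F1 of a model trained with this rho
        if score > best_score:
            best_rho, best_score = rho, score
    return best_rho, best_score

# For the chunking setup above, the candidate set would be [1, 2, 4, 8, 16, 32, 64].
```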
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2906–2919 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2906 Fine-Grained Temporal Relation Extraction Siddharth Vashishtha University of Rochester Benjamin Van Durme Johns Hopkins University Aaron Steven White University of Rochester Abstract We present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales. We use this framework to construct the largest temporal relations dataset to date, covering the entirety of the Universal Dependencies English Web Treebank. We use this dataset to train models for jointly predicting fine-grained temporal relations and event durations. We report strong results on our data and show the efficacy of a transfer-learning approach for predicting categorical relations. 1 Introduction Natural languages provide a myriad of formal and lexical devices for conveying the temporal structure of complex events—e.g. tense, aspect, auxiliaries, adverbials, coordinators, subordinators, etc. Yet, these devices are generally insufficient for determining the fine-grained temporal structure of such events. Consider the narrative in (1). (1) At 3pm, a boy broke his neighbor’s window. He was running away, when the neighbor rushed out to confront him. His parents were called but couldn’t arrive for two hours because they were still at work. Most native English speakers would have little difficulty drawing a timeline for these events, likely producing something like that in Figure 1. But how do we know that the breaking, the running away, the confrontation, and the calling were short, while the parents being at work was not? And why should the first four be in sequence, with the last containing the others? The answers to these questions likely involve a complex interplay between linguistic information, on the one hand, and common sense knowledge about events and their relationships, on the other run away arrive rush out confront call be at work break 5pm 3pm 4pm Figure 1: A typical timeline for the narrative in (1). (Minsky, 1975; Schank and Abelson, 1975; Lamport, 1978; Allen and Hayes, 1985; Hobbs et al., 1987; Hwang and Schubert, 1994). But it remains a question how best to capture this interaction. A promising line of attack lies in the task of temporal relation extraction. Prior work in this domain has approached this task as a classification problem, labeling pairs of eventreferring expressions—e.g. broke or be at work in (1)—and time-referring expressions—e.g. 3pm or two hours—with categorical temporal relations (Pustejovsky et al., 2003; Styler IV et al., 2014; Minard et al., 2016). The downside of this approach is that time-referring expressions must be relied upon to express duration information. But as (1) highlights, nearly all temporal duration information can be left implicit without hindering comprehension, meaning these approaches only explicitly encode duration information when that information is linguistically realized. In this paper, we develop a novel framework for temporal relation representation that puts event duration front and center. Like standard approaches using the TimeML standard, we draw inspiration from Allen’s (1983) seminal work on interval representations of time. But instead of annotating text for categorical temporal relations, we map events to their likely durations and event pairs directly to real-valued relative timelines. 
This change not only supports the goal of giving a more central role to event duration, it also allows us to better reason about the temporal structure of com2907 plex events as described by entire documents. We first discuss prior work on temporal relation extraction (§2) and then present our framework and data collection methodology (§3). The resulting dataset—Universal Decompositional Semantics Time (UDS-T)—is the largest temporal relation dataset to date, covering all of the Universal Dependencies (Silveira et al., 2014; De Marneffe et al., 2014; Nivre et al., 2015) English Web Treebank (Bies et al., 2012). We use this dataset to train a variety of neural models (§4) to jointly predict event durations and fine-grained (real-valued) temporal relations (§5), yielding not only strong results on our dataset, but also competitive performance on TimeML-based datasets (§6).1 2 Background We review prior work on temporal relations frameworks and temporal relation extraction systems. Corpora Most large temporal relation datasets use the TimeML standard (Pustejovsky et al., 2003; Styler IV et al., 2014; Minard et al., 2016). TimeBank is one of the earliest large corpora built using this standard, aimed at capturing ‘salient’ temporal relations between events (Pustejovsky et al., 2003). The TempEval competitions build on TimeBank by covering relations between all the events and times in a sentence. Inter-sentential relations, which are necessary for document-level reasoning, have not been a focus of the TempEval tasks, though at least one sub-task does address them (Verhagen et al., 2007, 2010; UzZaman et al., 2013, and see Chambers et al. 2014). Part of this likely has to do with the sparsity inherent in the TempEval event-graphs. This sparsity has been addressed with corpora such as the TimeBank-Dense, where annotators label all local-edges irrespective of ambiguity (Cassidy et al., 2014). TimeBank-Dense does not capture the complete graph over event and time relations, instead attempting to achieve completeness by capturing all relations both within a sentence and between neighboring sentences. We take inspiration from this work for our own framework. This line of work has been further improved on by frameworks such as Richer Event Description (RED), which uses a multi-stage annotation pipeline where various event-event phenomena, including temporal relations and sub1Data and code are available at http://decomp.io/. event relations are annotated together in the same datasets (O’Gorman et al., 2016). Similarly, Hong et al. (2016) build a cross-document event corpus which covers fine-grained event-event relations and roles with more number of event types and sub-types (see also Fokkens et al., 2013). Models Early systems for temporal relation extraction use hand-tagged features modeled with multinomial logistic regression and support vector machines (Mani et al., 2006; Bethard, 2013; Lin et al., 2015). Other approaches use combined rulebased and learning-based approaches (D’Souza and Ng, 2013) and sieve-based architectures— e.g. CAEVO (Chambers et al., 2014) and CATENA (Mirza and Tonelli, 2016). Recently, Ning et al. (2017) use a structured learning approach and show significant improvements on both TempEval-3 (UzZaman et al., 2013) and TimeBank-Dense (Cassidy et al., 2014). Ning et al. (2018) show further improvements on TimeBank-Dense by jointly modeling causal and temporal relations using Constrained Conditional Models and formulating the problem as an Interger Linear Programming problem. 
Neural network-based approaches have used both recurrent (Tourille et al., 2017; Cheng and Miyao, 2017; Leeuwenberg and Moens, 2018) and convolutional architectures (Dligach et al., 2017). Such models have furthermore been used to construct document timelines from a set of predicted temporal relations (Leeuwenberg and Moens, 2018). Such use of pairwise annotations can result in inconsistent temporal graphs, and efforts have been made to avert this issue by employing temporal reasoning (Chambers and Jurafsky, 2008; Yoshikawa et al., 2009; Denis and Muller, 2011; Do et al., 2012; Laokulrat et al., 2016; Ning et al., 2017; Leeuwenberg and Moens, 2017). Other work has aimed at modeling event durations from text (Pan et al., 2007; Gusev et al., 2011; Williams and Katz, 2012), though this work does not tie duration to temporal relations (see also Filatova and Hovy, 2001). Our approach combines duration and temporal relation information within a unified framework, discussed below. 3 Data Collection We collect the Universal Decompositional Semantics Time (UDS-T) dataset, which is annotated on top of the Universal Dependencies (Silveira et al., 2014; De Marneffe et al., 2014; Nivre et al., 2015) 2908 Dataset #Events #Event-Event Relations TimeBank 7,935 3,481 TempEval 2010 5,688 3,308 TempEval 2013 11,145 5,272 TimeBank-Dense 1,729 8,130 Hong et al. (2016) 863 25,610 UDS-T 32,302 70,368 Table 1: Number of total events, and event-event temporal relations captured in various corpora English Web Treebank (Bies et al., 2012) (UDEWT). The main advantages of UD-EWT over other similar corpora are: (i) it covers text from a variety of genres; (ii) it contains gold standard Universal Dependency parses; and (iii) it is compatible with various other semantic annotations which use the same predicate extraction standard (White et al., 2016; Zhang et al., 2017; Rudinger et al., 2018; Govindarajan et al., 2019). Table 1 compares the size of UDS-T against other temporal relations datasets. Protocol design Annotators are given two contiguous sentences from a document with two highlighted event-referring expressions (predicates). They are then asked (i) to provide relative timelines on a bounded scale for the pair of events referred to by the highlighted predicates; and (ii) to give the likely duration of the event referred to by the predicate from the following list: instantaneous, seconds, minutes, hours, days, weeks, months, years, decades, centuries, forever. In addition, annotators were asked to give a confidence ratings for their relation annotation and each of their two duration annotation on the same fivepoint scale - not at all confident (0), not very confident (1), somewhat confident (2), very confident (3), totally confident (4). An example of the annotation instrument is shown in Figure 2. Henceforth, we refer to the situation referred to by the predicate that comes first in linear order (feed in Figure 2) as e1 and the situation referred to by the predicate that comes second in linear order (sick in Figure 2) as e2. Annotators We recruited 765 annotators from Amazon Mechanical Turk to annotate predicate pairs in groups of five. Each predicate pair contained in the UD-EWT train set was annotated by a single annotator, and each in the UD-EWT development and test sets was annotated by three. Predicate extraction We extract predicates from UD-EWT using PredPatt (White et al., 2016; Figure 2: An annotated example from our protocol Zhang et al., 2017), which identifies 33,935 predicates from 16,622 sentences. 
We concatenate all pairs of adjacent sentences in the documents contained in UD-EWT, allowing us to capture intersentential temporal relations. Considering all possible pairs of predicates in adjacent sentences is infeasible, so we use a heuristic to capture the most interesting pairs. (See Appendix A for details.) 1 A B C 0 0.12 0.48 0.48 0.66 e1 e2 0.02 0.38 0.38 0.56 e1 e2 0.30 0.48 0.48 0.57 e1 e2 0 1 0 0.66 0.66 1 e1 e2 Figure 3: Normalization of slider values Normalization We normalize the slider responses for each event pair by subtracting the minimum slider value from all values, then dividing all such shifted values by the maximum value (after shifting). This ensures that the earliest beginning point for every event pair lies at 0 and that the right-most end-point lies at 1 while preserving the ratio between the durations implied by the sliders. Figure 3 illustrates this procedure for three hypothetical annotators annotating the same two events e1 and e2. Assuming that the duration classes for e1 or e2 do not differ across annotators, the relative chronology of the events is the same in each case. This preservation of relative chronology, over absolute slider position, is important because, for the purposes of determining temporal relation, the absolute positions that annotators give are meaningless, and we do not want our models to be forced to fit to such irrelevant information. Inter-annotator agreement We measure interannotator agreement (IAA) for the temporal relation sliders by calculating the rank (Spearman) 2909 0.00 0.05 0.10 0.15 0.20 0.25 instant seconds minutes hours days weeks months years decades centuries forever Relative Frequency Split dev train Figure 4: Distribution of event durations. correlation between the normalized slider positions for each pair of annotators that annotated the same group of five predicate pairs in the development set.2 The development set is annotated by 724 annotators. Rank correlation is a useful measure because it tells us how much different annotators agree of the relative position of each slider. The average rank correlation between annotators was 0.665 (95% CI=[0.661, 0.669]). For the duration responses, we compute the absolute difference in duration rank between the duration responses for each pair of annotators that annotated the same group of five predicate pairs in the development set. On average, annotators disagree by 2.24 scale points (95% CI=[2.21, 2.25]), though there is heavy positive skew (γ1 = 1.16, 95% CI=[1.15, 1.18])—evidenced by the fact that the modal rank difference is 1 (25.3% of the response pairs), with rank difference 0 as the next most likely (24.6%) and rank difference 2 as a distant third (15.4%). Summary statistics Figure 4 shows the distribution of duration responses in the training and development sets. There is a relatively high density of events lasting minutes, with a relatively even distribution across durations of years or less and few events lasting decades or more. The raw slider positions themselves are somewhat difficult to directly interpret. 
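Before turning to a more interpretable view of these values, the normalization procedure described above can be made concrete with a minimal sketch over the four raw slider positions of one event pair; this illustrates the stated procedure and is not taken from the authors' released code.

```python
def normalize_sliders(beg1, end1, beg2, end2):
    """Normalize one event pair's four raw slider values so they span [0, 1]."""
    values = [beg1, end1, beg2, end2]
    lo = min(values)
    shifted = [v - lo for v in values]   # earliest beginning point moves to 0
    hi = max(shifted)
    if hi == 0:                          # degenerate case: all four sliders equal
        return [0.0, 0.0, 0.0, 0.0]
    return [v / hi for v in shifted]     # right-most end point moves to 1

# Annotator A in Figure 3: (0.12, 0.48, 0.48, 0.66) -> [0.0, 0.667, 0.667, 1.0],
# i.e. the [0, 0.66, 0.66, 1] shown in the figure, up to rounding.
```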
To improve interpretability, we rotate the slider position space to construct four new dimensions: (i) PRIORITY, which is positive when e1 starts and/or ends earlier than e2 and negative otherwise; (ii) CONTAINMENT,which is most positive when e1 contains more of e2; (iii) EQUALITY, which is largest when 2Our protocol design also allows us to detect some bad annotations internal to the annotation itself, as opposed to comparing one annotator’s annotation of an item to another. See Appendix B for further details on our deployment of such annotation-internal validation techniques. Figure 5: Distribution of event relations. both e1 and e2 have the same temporal extents and smallest when they are most unequal; and (iv) SHIFT, which moves the events forward or backward in time. We construct these dimensions by solving for R in R 2 664 −1 −1 1 1 −1 1 1 −1 −1 1 −1 1 1 1 1 1 3 775 = 2S −1 where S 2 [0, 1]N⇥4 contains the slider positions for our N datapoints in the following order: beg(e1), end(e1), beg(e2), end(e2). Figure 5 shows the embedding of the event pairs on the first three of these dimensions of R. The triangular pattern near the top and bottom of the plot arises because strict priority—i.e. extreme positivity or negativity on the y-axis—precludes any temporal overlap between the two events, and as we move toward the center of the plot, different priority relations mix with different overlap relations— e.g. the upper-middle left corresponds to event pairs where most of e1 comes toward the beginning of e2, while the upper middle right of the plot corresponds to event pairs where most of e2 comes toward the end of e1. 4 Model For each pair of events referred to in a sentence, we aim to jointly predict the relative timelines of those events as well as their durations. We then use a separate model to induce document timelines from the relative timelines. Relative timelines The relative timeline model consists of three components: an event model, a 2910 What to feed my been sick dog …. …. for What to feed my been sick dog …. …. for What to feed my been sick dog …. …. for ELMo Attention Attention Attention Attention Attention gpred(i) gpred(j) MLPrel MLPdur MLPdur gdur(i) gdur(j) grel(i,j) hours days Tuner Figure 6: Network diagram for model. Dashed arrows are only included in some models. duration model, and a relation model. These components use multiple layers of dot product attention (Luong et al., 2015) on top of an embedding H 2 RN⇥D for a sentence s = [w1, . . . , wN] tuned on the three M-dimensional contextual embeddings produced by ELMo (Peters et al., 2018) for that sentence, concatenated together. H = tanh (ELMo(s)WTUNE + bTUNE) where D is the dimension for the tuned embeddings, WTUNE 2 R3M⇥D, and bTUNE 2 RN⇥D. Event model We define the model’s representation for the event referred to by predicate k as gpredk 2 RD, where D is the embedding size. We build this representation using a variant of dotproduct attention, based on the predicate root. aSPAN predk = tanh ' ASPAN PREDhROOT(predk) + bSPAN PRED ( ↵predk = softmax ' HSPAN(predk)aSPAN predk ( gpredk = [hROOT(predk); ↵predkHSPAN(predk)] where ASPAN PRED 2 RD⇥D, bSPAN PRED 2 RD; hROOT(predk) is the hidden representation of the kth predicate’s root; and HSPAN(predk) is obtained by stacking the hidden representations of the entire predicate. As an example, the predicate been sick for now in Figure 2 has sick as its root, and thus we would take the hidden representation for sick as hROOT(predk). 
Similarly, HSPAN(predk) would be equal to taking the hidden-state representations of been sick for now and stacking them together. Then, if the model learns that tense information is important, it may weight been using attention. Duration model The temporal duration representation gdurk for the event referred to by the kth predicate is defined similarly to the event representation, but instead of stacking the predicate’s span, we stack the hidden representations of the entire sentence H. aSENT durk = tanh ' ASENT DUR gpredk + bSENT DUR ( ↵durk = softmax(HaSENT durk ) gdurk = [gpredk; ↵durkH] where ASENT DUR 2 RD⇥size(gpredk) and bSENT DUR 2 RD. We consider two models of the categorical durations: a softmax model and a binomial model. The main difference is that the binomial model enforces that the probabilities pdurk over the 11 duration values be concave in the duration rank, whereas the softmax model has no such constraint. We employ a cross-entropy loss for both models. Ldur(dk; p) = −log pdk In the softmax model, we pass the duration representation gdurk for predicate k through a multilayer perceptron (MLP) with a single hidden layer of ReLU activations, to yield probabilities pdurk over the 11 durations. vdurk = ReLU(W(1) DURgdurk + b(1) DUR) p = softmax(W(2) DURvdurk + b(2) DUR) In the binomial distribution model, we again pass the duration representation through a MLP with a single hidden layer of ReLU activations, but in this case, we yield only a single value ⇡durk. With vdurk as defined above: ⇡= σ ⇣ w(2) DURvdurk + b(2) DUR ⌘ pc = ✓n c ◆ ⇡n(1 −⇡)(n−c) where c 2 {0, 1, 2, ..., 10} represents the ranked durations – instant (0), seconds (1), minutes (2), ..., centuries (9), forever (10) – and n is the maximum class rank (10). Relation model To represent the temporal relation representation between the events referred to by the ith and jth predicate, we again use a similar attention mechanism. 2911 aSENT relij = tanh ' ASENT REL [gpredi; gpredj] + bSENT REL ( ↵relij = softmax ⇣ HaSENT relij ⌘ grelij = [gpredi; gpredj; ↵relijH] where ASENT REL 2 RD⇥2size(gpredk) and bSENT REL 2 RD. The main idea behind our temporal model is to map events and states directly to a timeline, which we represent via a reference interval [0, 1]. For situation k, we aim to predict the beginning point bk and end-point ek ≥bk of k. We predict these values by passing grelij through an MLP with one hidden layer of ReLU activations and four real-valued outputs [ˆβi, ˆδi, ˆβj, ˆδj], representing the estimated relative beginning points (ˆβi, ˆβj) and durations (ˆδi, ˆδj) for events i and j. We then calculate the predicted slider values ˆsij = [ˆbi, ˆei,ˆbj, ˆej] h ˆbk, ˆek i = h σ ⇣ ˆβk ⌘ , σ ⇣ ˆβk + ///ˆδk /// ⌘i The predicted values ˆsij are then normalized in the same fashion as the true slider values prior to being entered into the loss. We constrain this normalized ˆsij using four L1 losses. Lrel(sij;ˆsij) = ///(bi −bj) −(ˆbi −ˆbj) /// + ///(ei −bj) −(ˆei −ˆbj) /// + ///(ej −bi) −(ˆej −ˆbi) /// + |(ei −ej) −(ˆei −ˆej)| The final loss function is then L = Ldur + 2Lrel. Duration-relation connections We also experiment with four architectures wherein the duration and relation models are connected to each other in the Dur ! Rel or Dur Rel directions. In the Dur ! Rel architectures, we modify grelij in two ways: (i) additionally concatenating the ith and jth predicate’s duration probabilities from the binomial distribution model, and (ii) not using the relation representation model at all. 
grelij = [gpredi; gpredj; ↵relijH; pi; pj] grelij = [pi; pj] In the Dur Rel architectures, we use two modifications: (i) we modify gdurk by concatenating the ˆbk and ˆek from the relation model, and (ii) we do not use the duration representation model at all, instead use the predicted relative duration ˆek −ˆbk obtained from the relation model, passing it through the binomial distribution model. gdurk = [gpredk; ↵durkH;ˆbk; ˆek] ⇡durk = ˆek −ˆbk Document timelines We induce the hidden document timelines for the documents in the UDST development set using relative timelines from (i) actual pairwise slider annotations; or (ii) slider values predicted by the best performing model on UDS-T development set. To do this, we assume a hidden timeline T 2 Rnd⇥2 + , where nd is the total number of predicates in that document, the two dimensions represent the beginning point and the duration of the predicates. We connect these latent timelines to the relative timelines, by anchoring the beginning points of all predicates such that there is always a predicate with 0 as the beginning point in a document and defining auxiliar variables ⌧ij and ˆsij for each events i and j. ⌧ij = [ti1, ti1 + ti2, tj1, tj1 + tj2] ˆsij = ⌧ij −min(⌧ij) max(⌧ij −min(⌧ij)) We learn T for each document under the relation loss Lrel(sij,ˆsij). We further constrain T to predict the categorical durations using the binomial distribution model on the durations tk2 implied by T, assuming ⇡k = σ(c log(tk2)). 5 Experiments We implement all models in pytorch 1.0. For all experiments, we use mini-batch gradient descent with batch-size 64 to train the embedding tuner (reducing ELMo to a dimension of 256), attention, and MLP parameters. Both the relation and duration MLP have a single hidden layer with 128 nodes and a dropout probability of 0.5 (see Appendix D for further details). To predict TimeML relations in TempEval3 (TE3; UzZaman et al., 2013, Task C-relation only) and TimeBank-Dense (TD; Cassidy et al., 2014), we use a transfer learning approach. We first use the best-performing model on the UDS-T development set to obtain the relation representation (grelij) for each pair of annotated event-event relations in TE3 and TD (see Appendix E for preprocessing details). We then use this vector as input features to a SVM classifier with a Gaussian 2912 Model Duration Relation Duration Relation Connection ⇢ rank diff. R1 Absolute ⇢ Relative ⇢ R1 softmax X 32.63 1.86 8.59 77.91 68.00 2.82 binomial X 37.75 1.75 13.73 77.87 67.68 2.35 X Dur Rel 22.65 3.08 -51.68 71.65 66.59 -6.09 binomial Dur ! Rel 36.52 1.76 13.17 77.58 66.36 0.85 binomial X Dur ! Rel 38.38 1.75 13.85 77.82 67.73 2.58 binomial X Dur Rel 38.12 1.75 13.68 78.12 68.22 2.96 Table 2: Results on test data based on different model representations; ⇢denotes the Spearman-correlation coefficient; rank-diff is the duration rank difference. The model highlighted in blue performs best on durations and is also close to the top performing model for relations on the development set. The numbers highlighted in bold are the best-performing numbers on the test data in the respective columns. 
kernel to train on the training sets of these datasets using the feature vector obtained from our model.3 Following recent work using continuous labels in event factuality prediction (Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018; White et al., 2018) and genericity prediction (Govindarajan et al., 2019), we report three metrics for the duration prediction: Spearman correlation (⇢), mean rank difference (rank diff), and proportion rank difference explained (R1). We report three metrics for the relation prediction: Spearman correlation between the normalized values of actual beginning and end points and the predicted ones (absolute ⇢), the Spearman correlation between the actual and predicted values in Lrel (relative ⇢), and the proportion of MAE explained (R1). R1 = 1 −MAEmodel MAEbaseline where MAEbaseline is always guessing the median. 6 Results Table 2 shows the results of different model architectures on the UDS-T test set, and Table 4 shows the results of our transfer-learning approach on test set of TimeBank-Dense (TD-test). UDS-T results Most of our models are able to predict the relative position of the beginning and ending of events very well (high relation ⇢) and the relative duration of events somewhat well (relatively low duration ⇢), but they have a lot more trouble predicting relation exactly and relatively less trouble predicting duration exactly. 3For training on TE3, we use TimeBank (TB; Pustejovsky et al., 2003) + AQUAINT (AQ; Graff) datasets provided in the TE3 workshop (UzZaman et al., 2013). For training on TD, we use TD-train and TD-dev. Duration model The binomial distribution model outperforms the softmax model for duration prediction by a large margin, though it has basically no effect on the accuracy of the relation model, with the binomial and softmax models performing comparably. This suggests that enforcing concavity in duration rank on the duration probabilities helps the model better predict durations. Connections Connecting the duration and relation model does not improve performance in general. In fact, when the durations are directly predicted from the temporal relation model—i.e. without using the duration representation model— the model’s performance drops by a large margin, with the Spearman correlation down by roughly 15 percentage points. This indicates that constraining the relations model to predict the durations is not enough and that the duration representation is needed to predict durations well. On the other hand, predicting temporal relations directly from the duration probability distribution—i.e. without using the relation representation model—results in a similar score as that of the top-performing model. This indicates that the duration representation is able to capture most of the relation characteristics of the sentence. Using both duration representation and relation representation separately (model highlighted in blue) results in the best performance overall on the UDS-T development set. TimeBank-Dense and TempEval3 Table 4 reports F1-micro scores on the test set of TimeBankDense compared with some other systems as reported by Cheng and Miyao (2017). We report these scores only on Event-Event (E-E) relations as our system captures only those. 
We also compute the standard temporal awareness F1 score on the test set of TempEval-3 (TE3-PT) considering 2913 Duration Word Attention Rank Freq soldiers 0.911 1.28 69 months 0.844 1.38 264 Nothing 0.777 5.07 114 minutes 0.768 1.33 81 astronauts 0.756 1.37 81 hour 0.749 1.41 84 Palestinians 0.735 1.72 288 month 0.721 2.03 186 cartoonists 0.714 1.35 63 years 0.708 1.94 588 days 0.635 1.39 84 thoughts 0.592 2.90 60 us 0.557 2.09 483 week 0.531 2.23 558 advocates 0.517 2.30 105 Relation Word Attention Rank Freq occupied 0.685 1.33 54 massive 0.522 2.71 66 social 0.510 1.68 57 general 0.410 3.52 168 few 0.394 3.07 474 mathematical 0.393 7.66 132 are 0.387 3.47 4415 comes 0.339 2.39 51 or 0.326 3.50 3137 and 0.307 4.86 17615 emerge 0.305 2.67 54 filed 0.303 7.14 66 s 0.298 4.03 1152 were 0.282 3.49 1308 gets 0.239 7.36 228 Table 3: Mean attention weight, mean attention rank, and frequency for 15 words in the development set with the highest mean duration-attention (left) and relation-attention (right) weights. For duration, the words highlighted in bold directly correspond to some duration class. For relation, the words in bold are either conjunctions or words containing tense information. only E-E relations and achieve a score of 0.498.4 Our system beats the TD F1-micro scores of all other systems reported in Table 4. As a reference, the top performing system on TE3-PT (Ning et al., 2017) reports an F1 score of 0.672 over all relations, but is not directly comparable to our system as we only evaluate on event-event relations. These results indicate that our model is able to achieve competitive performance on other standard temporal classification problems. Systems Evaluation Data F1 (E-E) CAEVO TD-test 0.494 CATENA TD-test 0.519 Cheng and Miyao (2017) TD-test 0.529 This work TD-test 0.566 Table 4: F1-micro scores of event-event relations in TD-test based on our transfer learning experiment. 7 Model Analysis and Timelines We investigate two aspects of the best-performing model on the development set (highlighted in Table 2): (i) what our duration and relation representations attend to; and (ii) how well document timelines constructed from the model’s pre4We do not report the temporal awareness scores (F1) of other systems on TE3-PT, since they report their metrics on all relations, including timex-timex, and event-timex relations, and thus they are not directly comparable. For TD, only those systems are reported that report F1-micro scores. dictions match those constructed from the annotations. (See Appendix F for further analyses.) Attention The advantage of using an attention mechanism is that we can often interpret what linguistic information the model is using by analyzing the attention weights. We extract these attention weights for both the duration representation and the relation representation from our best model on the development set. Duration We find that words that denote some time period—e.g. month(s), minutes, hour, years, days, week—are among the words with highest mean attention weight in the duration model, with seven of the top 15 words directly denoting one of the duration classes (Table 3). This is exactly what one might expect this model to rely heavily on, since time expressions are likely highly informative for making predictions about duration. It also may suggest that we do not need to directly encode relations between event-referring and time-referring expressions in our framework—as do annotation standards like TimeML—since our models may discover them. 
The remainder of the top words in the duration model are plurals or mass nouns (soldiers, thoughts etc.). This may suggest that the plurality of a predicate’s arguments is an indicator of the likely duration of the event referred to 2914 by that predicate. To investigate this possibility, we compute a multinomial regression predicting the attention weights ↵s for each sentence s from the K morphological features of each word in that sentence Fs 2 {0, 1}length(s)⇥K, which are extracted from the UD-EWT features column and binarized. To do this, we optimize coefficients c in argc min P s D (↵s k softmax (Fsc)), where D is the KL divergence. We find that the five most strongly weighted positive features in c are all features of nouns—NUMBER=plur, CASE=acc, PRONTYPE=prs, NUMBER=sing, GENDER=masc—suggesting that good portion of duration information can be gleaned from the arguments of a predicate. This may be because nominal information can be useful in determining whether the clause is about particular events or generic events (Govindarajan et al., 2019). Relation A majority of the words with highest mean attention weight in the relation model are either coordinators—such as or and and—or bearers of tense information—i.e. lexical verbs and auxiliaries. The first makes sense because, in context, coordinators can carry information about temporal sequencing (see Wilson and Sperber, 1998, i.a.). The second makes sense in that information about the tense of predicates being compared likely helps the model determine relative ordering of the events they refer to. Similar to duration attention analysis, for relation attention, we find that the five most strongly weighted positive features in c are all features of verbs or auxiliaries—PERSON=1, PERSON=3, TENSE=pres, TENSE=past, MOOD=ind— suggesting that a majority of the information relevant to relation can be gleaned from the tensebearing units in a clause. Document timelines We apply the document timeline model described in §4 to both the annotations on the development set and the bestperforming model’s predictions to obtain timelines for all documents in the development set. Figure 7 shows an example, comparing the two resulting document timelines. For these two timelines, we compare the induced beginning points and durations, obtaining a mean Spearman correlation of 0.28 for beginning points and -0.097 for durations. This suggests that the model agrees to some extent with the annotations about the beginning points of events in most documents but is struggling to find the correct duwas lower than got rate showed was great took recommend go explain Figure 7: Learned timeline for the following document based on actual (black) and predicted (red) annotations: “A+. I would rate Fran pcs an A + because the price was lower than everyone else , i got my computer back the next day , and the professionalism he showed was great . He took the time to explain things to me about my computer , i would recommend you go to him. David” ration spans. One possible reason for poor prediction of durations could be the lack of a direct source of duration information. The model currently tries to identify the duration based only on the slider values, which leads to poor performance as already seen in one of the Dur Rel model. 8 Conclusion We presented a novel semantic framework for modeling fine-grained temporal relations and event durations that maps pairs of events to realvalued scales for the purpose of constructing document-level event timelines. 
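Before concluding, the morphological-feature regression on the attention weights described above can be made concrete with a minimal sketch; the optimizer, learning rate, and step count below are illustrative assumptions rather than reported details, and the inputs are assumed to be float tensors (each F_s of shape length(s) × K, each α_s of shape length(s)).

```python
import torch

def fit_feature_coefficients(feature_mats, attn_weights, num_features, steps=500, lr=0.1):
    """Fit coefficients c minimising sum_s KL(alpha_s || softmax(F_s c))."""
    c = torch.zeros(num_features, requires_grad=True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.tensor(0.0)
        for F, alpha in zip(feature_mats, attn_weights):
            log_q = torch.log_softmax(F @ c, dim=0)   # log softmax(F_s c)
            # KL(alpha || q) = sum alpha * (log alpha - log q)
            loss = loss + torch.sum(alpha * (torch.log(alpha + 1e-12) - log_q))
        loss.backward()
        opt.step()
    return c.detach()
```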
We used this framework to construct the largest temporal relations dataset to date – UDS-T – covering the entirety of the UD-EWT. We used this dataset to train models for jointly predicting fine-grained temporal relations and event durations, reporting strong results on our data and showing the efficacy of a transfer-learning approach for predicting standard, categorical TimeML relations. Acknowledgments We are grateful to the FACTS.lab at the University of Rochester as well as three anonymous reviewers for useful comments on this work. This research was supported by the University of Rochester, JHU HLTCOE, and DARPA AIDA. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government. 2915 References James F Allen. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832–843. James F Allen and Patrick J Hayes. 1985. A commonsense theory of time. In Proceedings of the 9th International Joint Conference on Artificial IntelligenceVolume 1, pages 528–531. Morgan Kaufmann Publishers Inc. Steven Bethard. 2013. Cleartk-timeml: A minimalist approach to tempeval 2013. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 10–14. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English web treebank. Linguistic Data Consortium, Philadelphia, PA. Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 501–506. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273– 284. Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 698–706. Association for Computational Linguistics. Fei Cheng and Yusuke Miyao. 2017. Classifying temporal relations by bidirectional lstm over dependency paths. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 1–6. Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), volume 14, pages 4585–4592. Pascal Denis and Philippe Muller. 2011. Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition. In IJCAI-11-International Joint Conference on Artificial Intelligence. Dmitriy Dligach, Timothy Miller, Chen Lin, Steven Bethard, and Guergana Savova. 2017. Neural temporal relation extraction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 746–751. Quang Xuan Do, Wei Lu, and Dan Roth. 2012. 
Joint inference for event timeline construction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 677– 687. Association for Computational Linguistics. Jennifer D’Souza and Vincent Ng. 2013. Classifying temporal relations with rich linguistic knowledge. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 918–927. Elena Filatova and Eduard Hovy. 2001. Assigning time-stamps to event-clauses. In Proceedings of the Workshop on Temporal and Spatial Information Processing-Volume 13, page 13. Association for Computational Linguistics. Antske Fokkens, Marieke van Erp, Piek Vossen, Sara Tonelli, Willem Robert van Hage, Luciano Serafini, Rachele Sprugnoli, and Jesper Hoeksema. 2013. GAF: A grounded annotation framework for events. In Workshop on Events: Definition, Detection, Coreference, and Representation, pages 11–20. Venkata Subrahmanyan Govindarajan, Benjamin Van Durme, and Aaron Steven White. 2019. Decomposing generalization: Models of generic, habitual, and episodic statements. arXiv preprint arXiv:1901.11429. David Graff. The aquaint corpus of English news text:[content copyright] Portions c⃝1998-2000 New York Times, Inc., c⃝1998-2000 Associated Press, Inc., c⃝1996-2000 Xinhua News Service. Linguistic Data Consortium. Andrey Gusev, Nathanael Chambers, Pranav Khaitan, Divye Khilnani, Steven Bethard, and Dan Jurafsky. 2011. Using query patterns to learn the duration of events. In Proceedings of the Ninth International Conference on Computational Semantics, pages 145–154. Association for Computational Linguistics. Jerry R Hobbs, William Croft, Todd Davies, Douglas Edwards, and Kenneth Laws. 1987. Commonsense metaphysics and lexical semantics. Computational Linguistics, 13(3-4):241–250. Yu Hong, Tongtao Zhang, Tim O’Gorman, Sharone Horowit-Hendler, Heng Ji, and Martha Palmer. 2016. Building a cross-document event-event relation corpus. In Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with Association for Computational Linguistics 2016 (LAW-X 2016), pages 1–6. 2916 Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Chung Hee Hwang and Lenhart K Schubert. 1994. Interpreting tense, aspect and time adverbials: A compositional, unified approach. In Temporal Logic, pages 238–264. Springer. Leslie Lamport. 1978. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558–565. Natsuda Laokulrat, Makoto Miwa, and Yoshimasa Tsuruoka. 2016. Stacking approach to temporal relation classification with temporal inference. Information and Media Technologies, 11:53–78. Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643–1648. Artuur Leeuwenberg and Marie-Francine Moens. 2018. Temporal information extraction by predicting relative time-lines. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1237–1246. Tuur Leeuwenberg and Marie-Francine Moens. 2017. Structured learning for temporal relation extraction from clinical records. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1150– 1158. 
Chen Lin, Dmitriy Dligach, Timothy A Miller, Steven Bethard, and Guergana K Savova. 2015. Multilayered temporal modeling for the clinical domain. Journal of the American Medical Informatics Association, 23(2):387–395. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 753–760. Association for Computational Linguistics. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Anne-Lyse Myriam Minard, Manuela Speranza, Ruben Urizar, Begona Altuna, Marieke van Erp, Anneleen Schoen, and Chantal van Son. 2016. Meantime, the newsreader multilingual event and time corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA). Marvin Minsky. 1975. A framework for representing knowledge. The Psychology of Computer Vision. Paramita Mirza and Sara Tonelli. 2016. Catena: Causal and temporal relation extraction from natural language texts. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 64–75. Qiang Ning, Zhili Feng, and Dan Roth. 2017. A structured learning approach to temporal relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1027–1037. Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2018. Joint reasoning for temporal and causal relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2278–2288. Joakim Nivre, Zeljko Agic, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bowman, Giuseppe G. A. Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Toma Erjavec, Richrd Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gonzales, Bruno Guillaume, Jan Haji, Dag Haug, Radu Ion, Elena Irimia, Anders Johannsen, Hiroshi Kanayama, Jenna Kanerva, Simon Krek, Veronika Laippala, Alessandro Lenci, Nikola Ljubei, Teresa Lynn, Christopher Manning, Ctlina Mrnduc, David Mareek, Hctor Martnez Alonso, Jan Maek, Yuji Matsumoto, Ryan McDonald, Anna Missil, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja vrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk, Kiril Simov, Aaron Smith, Jan tpnek, Alane Suhr, Zsolt Sznt, Takaaki Tanaka, Reut Tsarfaty, Sumire Uematsu, Larraitz Uria, Viktor Varga, Veronika Vincze, Zdenk abokrtsk, Daniel Zeman, and Hanzhi Zhu. 2015. 
Universal Dependencies 1.2. http://universaldependencies.github.io/docs/. Tim O’Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 47–56. 2917 Feng Pan, Rutu Mulkar-Mehta, and Jerry R Hobbs. 2007. Modeling and learning vague event durations for temporal reasoning. In Proceedings of the 22nd National Conference on Artificial IntelligenceVolume 2, pages 1659–1662. AAAI Press. Fabian Pedregosa, Gal Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics, volume 2003, page 40. Lancaster, UK. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. arXiv preprint arXiv:1804.02472. Roger C Schank and Robert P Abelson. 1975. Scripts, plans, and knowledge. In Proceedings of the 4th International Joint Conference on Artificial Intelligence-Volume 1, pages 151–157. Morgan Kaufmann Publishers Inc. Natalia Silveira, Timothy Dozat, Marie-Catherine De Marneffe, Samuel R Bowman, Miriam Connor, John Bauer, and Christopher D Manning. 2014. A gold standard dependency corpus for english. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014), pages 2897–2904. Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, and Iryna Gurevych. 2017. Integrating deep linguistic features in factuality prediction over unified datasets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 352–357. William F Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C de Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova, et al. 2014. Temporal annotation in the clinical domain. Transactions of the Association for Computational Linguistics, 2:143. Julien Tourille, Olivier Ferret, Aurelie Neveol, and Xavier Tannier. 2017. Neural architecture for temporal relation extraction: A bi-lstm approach for detecting narrative containers. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 224–230. Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen, Marc Verhagen, and James Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 1–9. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval temporal relation identification. 
In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 75–80. Association for Computational Linguistics. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 57– 62. Association for Computational Linguistics. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, TX. Association for Computational Linguistics. Aaron Steven White, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic inference in neural models. arXiv preprint arXiv:1808.06232. Jennifer Williams and Graham Katz. 2012. Extracting and modeling durations for habits and events from twitter. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 223–227. Association for Computational Linguistics. Deirdre Wilson and Dan Sperber. 1998. Pragmatics and time. Pragmatics and Beyond New Series, pages 1–22. Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identifying temporal relations with markov logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 405–413. Association for Computational Linguistics. Sheng Zhang, Rachel Rudinger, and Benjamin Van Durme. 2017. An evaluation of predpatt and open ie via stage 1 semantic role labeling. In IWCS 2017—12th International Conference on Computational Semantics (Short papers). 2918 A Data Collection We concatenate two adjacent sentences to form a combined sentence which allows us to capture inter-sentential temporal relations. Considering all possible pairs of events in the combined sentence results into an exploding number of event-event comparisons. Therefore, to reduce the total number of comparisons, we find the pivot-predicate of the antecedent of the combined sentence as follows - find the root predicate of the antecedent and if it governs a CCOMP, CSUBJ, or XCOMP, follow that dependency to the next predicate until a predicate is found that doesn’t govern a CCOMP, CSUBJ, or XCOMP. We then take all pairs of the antecedent predicates and pair every predicate of the consequent only with the pivot-predicate. This results into 'N 2 ( +M predicates instead of 'N+M 2 ( per sentence, where N and M are the number of predicates in the antecedent and consequent respectively. This heuristic allows us to find a predicate that loosely denotes the topic being talked about in the sentence. Figure 8 shows an example of finding the pivot predicate. Figure 8: Our heuristic finds fly as (the root of) the pivot predicate in Has anyone considered that perhaps George Bush just wanted to fly jets? B Rejecting Annotations We design multiple checks to detect potentially bad annotations during our data collection. A single assignment contains 5 annotations (predicatepairs). Once an annotation is flagged by any of these checks, we may accept or reject the assignment based on our subjective opinion about the particular case. 
Annotations are flagged based on the following conditions: B.1 Time completion Our pilot studies indicate a median time of roughly 4 minutes to complete a single assignment (5 annotations). We automatically reject any assignFigure 9: An example illustrating an inconsistency between the annotated slider positions and the durations ment which is completed under a minute as we believe that it is not plausible to finish the assignment within a minute. We find that such annotations mostly had default values annotated. B.2 Same slider values If all the beginning points and end-points in an assignment have the same values, we automatically reject those assignments. B.3 Same duration values Sometimes we encounter cases where all duration values in an assignment are annotated to have the same value. This scenario , although unlikely, could genuinely be an instance of correct annotation. Hence we manually check for these cases and reject only if the annotations look dubious in nature based on our subjective opinion. B.4 Inconsistency between the slider positions and durations Our protocol design allows us to detect potentially bad annotations by detecting inconsistency between the slider positions (beginning and endpoints) and the duration values of events in an annotated sentence. The annotator in Figure 9 assigns slider values for e1 (think) as [7,60] i.e. a time-span of 53 and assigns its duration as minutes. But at the same time, the slider values for e2 (do) are annotated as [50,60] i.e. a time-span of 10, even though its duration is assigned as years. This is an inconsistency as e2 has a smaller timespan denoted by the sliders but has the longer duration as denoted by years. We reject assignments where more than 60% of annotations have this inconsistency. 2919 C Inter-annotator agreement Annotators were asked to approximate the relative duration of the two events that they were annotating using the distance between the sliders. This means that an annotation is coherent insofar as the ratio of distances between the slider responses for each event matches the ratio of the categorical duration responses. We rejected annotations wherein there was gross mismatch between the categorical responses and the slider responses — i.e. one event is annotated as having a longer duration but is given a shorter slider response — but because this does not guarantee that the exact ratios are preserved, we assess that here using a canonical correlation analysis (CCA; Hotelling 1936) between the categorical duration responses and the slider responses. Duration Relation dur(e1) dur(e2) beg(e1) end(e1) beg(e2) end(e2) CC2 CC1 0.5 0.0 −0.5 Figure 10: Scores from canonical correlation analysis comparing categorical duration annotations and slider relation annotations. Figure 10 shows the CCA scores. We find that the first canonical correlation, which captures the ratios between unequal events, is 0.765; and the second, which captures the ratios between roughly unequal events, is 0.427. This preservation of the ratios is quite impressive in light of the fact that our slider scales are bounded; though we hoped for at least a non-linear relationship between the categorical durations and the slider distances, we did not expect such a strong linear relationship. D Confidence Ratings Annotators use the confidence scale in different ways. Some always respond with totally confident whereas others use all five options. 
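Because the check in B.4 is the most involved of these filters, a minimal sketch may be helpful; the duration ranks and the 60% threshold follow the description above, while the function names and data layout are hypothetical rather than taken from our validation code.

```python
DURATION_RANKS = {"instant": 0, "seconds": 1, "minutes": 2, "hours": 3, "days": 4,
                  "weeks": 5, "months": 6, "years": 7, "decades": 8,
                  "centuries": 9, "forever": 10}

def is_inconsistent(e1_sliders, e1_duration, e2_sliders, e2_duration):
    """Flag an annotation whose longer categorical duration gets the shorter slider span."""
    span1 = e1_sliders[1] - e1_sliders[0]
    span2 = e2_sliders[1] - e2_sliders[0]
    rank1, rank2 = DURATION_RANKS[e1_duration], DURATION_RANKS[e2_duration]
    # Figure 9 example: ([7, 60], "minutes") vs ([50, 60], "years") -> inconsistent
    return (span1 > span2 and rank1 < rank2) or (span2 > span1 and rank2 < rank1)

def should_reject(assignment):
    """Reject an assignment (5 annotations) if more than 60% are inconsistent."""
    flags = [is_inconsistent(*annotation) for annotation in assignment]
    return sum(flags) / len(flags) > 0.6
```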
To cater to these differences, we normalize the confidence ratings for each event-pair using a standard ordinal scale normalization technique known as ridit scoring. In ridit scoring ordinal labels are mapped to (0, 1) using the empirical cumulative distribution function of the ratings given by each annotator. Ridit scoring re-weights the importance of a scale label based on the frequency of its usage. We weight both Ldur, and Lrel by the riditscored confidence ratings of event durations and event relations, respectively. E Processing TempEval3 and TimeBank-Dense Since we require spans of predicates for our model, we pre-process TB+AQ and TD by removing all xml tags from the sentences and then we pass it through Stanford CoreNLP 3.9.2 (Manning et al., 2014) to get the corresponding conllu format. Roots and spans of predicates are then extracted using PredPatt. To train the SVM classifier, we use sklearn 0.20.0; Pedregosa et al. 2011. We run a hyperparameter grid-search over 4-fold CV with C: (0.1, 1, 10), and gamma: (0.001, 0.01, 0.1, 1). The best performance on cross-validation (C=1 and gamma=0.001) is then evaluated on the test set of TE3 i.e. TE3-Platinum (TE3-PT), and TD-test. For our purposes, the identity and simultaneous relations in TB+AQ are equivalent when comparing event-event relations. Hence, they are collapsed into one single relation. F Further analysis We rotate the predicted slider positions in the relation space defined in §3 and compare it with the rotated space of actual slider positions. We see a Spearman correlation of 0.19 for PRIORITY, 0.23 for CONTAINMENT, and 0.17 for EQUALITY. This suggests that our model is best able to capture CONTAINMENT relations and slightly less good at capturing PRIORITY and EQUALITY relations, though all the numbers are quite low compared to the absolute ⇢and relative ⇢metrics reported in Table 2. This may be indicative of the fact that our models do somewhat poorly on predicting more fine-grained aspects of an event relation, and in the future it may be useful to jointly train against the more interpretable PRIORITY, CONTAINMENT, and EQUALITY measures instead of or in conjunction with the slider values.
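As a supplement to Appendix D, per-annotator ridit scoring can be sketched as follows; this uses the common definition of a ridit (the empirical proportion of strictly lower ratings plus half the proportion of the rating itself, computed from each annotator's own rating distribution), which matches the description above but is an assumption about the exact variant used.

```python
from collections import Counter

def ridit_scores(ratings):
    """Map one annotator's ordinal ratings (e.g. 0-4 confidence) into (0, 1)."""
    counts = Counter(ratings)
    total = len(ratings)
    below, score_for = 0.0, {}
    for level in sorted(counts):
        p = counts[level] / total
        score_for[level] = below + 0.5 * p   # ECDF below the level + half its own mass
        below += p
    return [score_for[r] for r in ratings]

# An annotator who always answers "totally confident" (4) gets 0.5 everywhere,
# so an uninformative use of the scale is down-weighted relative to a rare,
# deliberate "totally confident" from an annotator who uses the whole scale.
```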
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2920–2930 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2920 FIESTA: Fast IdEntification of State-of-The-Art models using adaptive bandit algorithms Henry B. Moss STOR-i Centre for Doctoral Training, Lancaster University Andrew Moore School of Computing and Communications, Lancaster University [email protected] David S. Leslie Dept. of Mathematics and Statistics, Lancaster University Paul Rayson School of Computing and Communications, Lancaster University Abstract We present FIESTA, a model selection approach that significantly reduces the computational resources required to reliably identify state-of-the-art performance from large collections of candidate models. Despite being known to produce unreliable comparisons, it is still common practice to compare model evaluations based on single choices of random seeds. We show that reliable model selection also requires evaluations based on multiple train-test splits (contrary to common practice in many shared tasks). Using bandit theory from the statistics literature, we are able to adaptively determine appropriate numbers of data splits and random seeds used to evaluate each model, focusing computational resources on the evaluation of promising models whilst avoiding wasting evaluations on models with lower performance. Furthermore, our userfriendly Python implementation produces confidence guarantees of correctly selecting the optimal model. We evaluate our algorithms by selecting between 8 target-dependent sentiment analysis methods using dramatically fewer model evaluations than current model selection approaches. 1 Introduction and Background Natural Language Processing (NLP) is a field driven by empirical evaluations. Authors are under pressure to demonstrate that their models or methods achieve state-of-the-art performance on a particular task or dataset, which by definition requires reliable model comparison. As models become more numerous, require larger computational resources to train, and the performance of competing models gets closer, the task of reliable model selection has not only become more important, but also increasingly difficult. Without full disclosure of model settings and data splits, it is impossible to accurately compare methods and models. To be able to perform meaningful model comparisons, we need to be able to reliably evaluate models. Unfortunately, evaluating a model is a non-trivial task and the best we can do is to produce noisy estimates of model performance with the following two distinct sources of stochasticity: 1. We only have access to a finite training dataset, however, evaluating a model on its training data leads to severe over-estimates of performance. To evaluate models without over-fitting, practitioners typically randomly partitioning data into independent training and testing sets, producing estimates that are random quantities with often high variability for NLP problems (Moss et al., 2018). Although methods like bootstrapping (Efron and Tibshirani, 1994) and leave-oneout cross validation (Kohavi, 1995) can provide deterministic estimates of performance, they require the fitting of a large number of models and so are not computationally feasible for the complex models and large data prevalent in NLP. 
Standard NLP model evaluation strategies range from using a simple (and computationally cheap) single train-test split, to the more sophisticated K-fold cross validation, CV (Kohavi, 1995). 2. The vast majority of recent NLP models are non-deterministic and so their performance has another source of stochasticity, controlled by the choice of random seed during training. Common sources of model instability in modern NLP include weight initialisation, data sub-sampling for stochastic gradient calculation, negative sampling used to train word embeddings (Mikolov et al., 2013) and feature sub-sampling for ensemble methods. In particular, the often state-of-the-art LSTMs (and its many variants) have been 2921 shown to exhibit high sensitivity to random seeds (Reimers and Gurevych, 2017). For reliable model selection, it is crucial to take into account both sources of variability when estimating model performance. Observing a higher score for one model could be a consequence of a particularly non-representative train-test split and/or random seed used to evaluate the model rather than a genuine model improvement. This subtlety is ignored by large scale NLP competitions such as SemEval with evaluations based on a pre-determined train-test split. Although more precise model evaluations can be obtained with higher computation, calculating overly precise model evaluations is a huge waste of computational resource. On the other hand, our evaluations need to provide reliable conclusions (with only a small probability of selecting a sub-optimal model). It is poorly understood how to choose an appropriate evaluation strategy for a given model selection problem. These are task specific, depending on model stability, the closeness in performance of competing models and subtle properties of the data such as the representativeness of train-test splits. In contrast to common practice, we consider model selection as a sequential process. Rather than using a fixed evaluation strategy for each model (which we refer to as a non-adaptive approach), we start with a cheap evaluation of each model on just a single train-test split, and then cleverly choose where to allocate further computational resources based on the observed evaluations. If we decide to further test a promising model, we calculate an additional evaluation based on another data split and seed, observing both sources of evaluation variability and allowing reliable assessments of performance. To perform sequential model fitting, we borrow methods from the multi-armed-bandit (MAB) statistical literature (Lai and Robbins, 1985). This field covers problems motivated by designing optimal strategies for pulling the arms of a bandit (also known as a slot machine) in casinos. Each arm produces rewards from different random distributions which the user must learn by pulling arms. In particular, model selection is equivalent to the problem of best-arm-identification; identifying the arm with the highest mean. Although appearing simple at a first glance, this problem is deceptively complex and has provided motivation for efficient algorithms in a wide range of domains, including clinical trials (Villar et al., 2015) and recommendation systems (Li et al., 2010). Although we believe that we are the first to use bandits to reduce the cost and improve the reliability of model selection, we are not the first to use them in NLP. 
Recent work in machine translation makes use of another major part of the MAB literature, seeking to optimise the long-term performance of translation algorithms (Nguyen et al., 2017; Sokolov et al., 2016; Lawrence et al., 2017). Within NLP, our work is most similar to Haffari et al. (2017), who use bandits to minimise the number of data queries required to calculate the F-scores of models. However, this work does not consider the stochasticity of the resulting estimates or easily extend to other evaluation metrics. The main contribution of this paper is the application of three intuitive algorithms to model selection in NLP, alongside a user-friendly Python implementation: FIESTA (Fast IdEntification of State-of-The-Art)1. We can automatically identify an optimal model from large collections of candidate models to a user-chosen confidence level in a small number of model evaluations. We focus on three distinct scenarios that are of interest to the NLP community. Firstly, we consider the fixed budget (FB) model selection problem (Section 4.1), a situation common in industry, where a fixed quota of computational resources (or time constraints for real-time decisions) must be appropriately allocated to identify an optimal model with the highest possible confidence. In contrast, we also consider the fixed confidence (FC) problem (Section 4.2), which we expect to be of more use for researchers. Here, we wish to claim with a specified confidence level that our selected model is state-of-the-art against a collection of competing models using the minimal amount of computation. Finally, we also consider an extension to the FC scenario, where a practitioner has the computational capacity to fit multiple models in parallel. We demonstrate the effectiveness of our procedures over current model selection approaches when identifying an optimal target-dependent sentiment analysis model from a set of eight competing candidate models (Section 5). 1https://github.com/apmoore1/fiesta 2922 2 Motivating example We now provide evidence for the need to vary both data splits and random seeds for reliable model selection. We extend the motivating example used in the work of Reimers and Gurevych (2017), comparing two LSTM-based Named Entity Recognition (NER) models by Ma and Hovy (2016) and Lample et al. (2016), differing only in character representation (via a CNN and a LSTM respectively). We base model training on Ma and Hovy (2016), however, following the settings of Yang et al. (2018) we use a batch size of 64, a weight decay of 10e−9 and removed momentum. We ran each of the NER models five times with a different random seed on 150 different train, validation, and test splits2. Reimers and Gurevych (2017) showed the effect of model instability between these two models, where changing the model’s random seeds can lead to drawing different conclusions about which model performed best. We extend this argument by showing that different conclusions can also be drawn if we instead vary the train-test split used for the model evaluation (Figure 1). We see that while data splits 0 and 2 correctly suggest that the LSTM is optimal, using data split 1 suggests the opposite. Therefore, it is clear that we must vary both the random seeds and train-test splits used to evaluate our models if we want reliable model selection. 
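To make the two sources of evaluation noise concrete, a single model evaluation can be treated as one draw that re-samples both the train-test split and the training seed. The sketch below is our illustration only; `train_and_score` is a placeholder for training a candidate model and returning, say, its macro F1.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def noisy_evaluation(train_and_score, X, y, test_fraction=0.2, rng=None):
    """One draw of a model's performance, exposing both sources of noise:
    a freshly sampled train-test split and a freshly sampled training seed.

    `train_and_score(X_tr, y_tr, X_te, y_te, seed)` stands in for training
    the candidate model and returning its evaluation metric on the test set.
    """
    rng = np.random.default_rng(rng)
    split_seed = int(rng.integers(2**31 - 1))   # controls the data split
    train_seed = int(rng.integers(2**31 - 1))   # controls init, dropout, sampling, ...
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_fraction, random_state=split_seed)
    return train_and_score(X_tr, y_tr, X_te, y_te, seed=train_seed)

if __name__ == "__main__":
    # Tiny illustration with a dummy "model" whose score depends only on the split.
    X, y = np.arange(100).reshape(-1, 1), np.arange(100) % 2
    dummy = lambda X_tr, y_tr, X_te, y_te, seed: float(y_te.mean())
    print([round(noisy_evaluation(dummy, X, y), 3) for _ in range(3)])
```

The selection algorithms that follow only ever interact with such a draw: each request for "one more evaluation of model m" re-samples both the split and the seed.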
3 Problem Statement Extending notation from Arcuri and Briand (2014), we can precisely state the task of selecting between a collection of N candidate models S = {m1, m2, ..mN} as finding m∗= argmax m∈S M(m). (1) m∗is the best model according to some chosen evaluation metric M that measures the performance of that model, e.g accuracy, F-score or AUC (for an summary of model evaluation metrics see Friedman et al. (2001)). As already argued, Equation (1) paints an overly simplistic picture of model selection. In reality we only have access to noisy realisations of the true model score M(m) and direct comparisons of single realisations of random variables are 2The original CoNLL data was split with respect to time rather than random sub-sampling, explaining the discrepancy with previous scores on this dataset using the same models. Figure 1: The left plot shows the distribution of results when varying the data splits and random seeds, with the dashed lines representing the quartile values. The three right plots each represent a different single data split over five runs on different random seeds. The lines represent a single run result. unreliable. Therefore, we follow the arguments of Reimers and Gurevych (2018) and consider a meaningful way of comparing noisy model evaluations: namely, finding the model with largest expected performance estimate across different train-test splits and random seeds. Defining the mean performance of model m as µm, we see that the task of model selection is equivalent to the accurate learning and comparison of these N unknown means: m∗= argmax m∈S µm. We can now set up the sequential framework of our model selection procedure and precisely state what we mean by reliable model selection. At each step in our algorithm we choose a model to evaluate and sample a performance estimate by randomly generating a data split and random seed. After collecting evaluations, we can calculate sample means for each model, which we denote as ˆµm. After running our algorithm for T steps, reliable model selection corresponds to knowing how confident we should be that our chosen model ˆmT = argmax ˆµm is in fact the true optimal model m∗, i.e. we wish to make a precise statement of the form; P ( ˆmT = m∗) ≥1 −δ, (2) where 1 −δ represents this confidence. 2923 In Section 1 we motivated two distinct goals of a sequential model selection routine, which we can now state as: 1. Fixed budget model selection (FB): We wish to find the best model using only a fixed budget of T model evaluations. The aim is to collect the T evaluations that allow us to claim (2) with the largest possible confidence level 1 −δ. 2. Fixed confidence model selection (FC): We wish to find the best model to a pre-specified confidence level. The aim is to collect the minimal number of model evaluations that allow us to claim (2). Although an algorithm designed to do well in one of these scenarios will likely also do well in the other, we will see that to achieve the best performance at either FB or FC model selection, we require subtly different algorithms. 4 Algorithms We now examine model selection from a bandit viewpoint, summarising three bandit algorithms and relating their use to three distinct model selection scenarios. 
Although the underpinning theoretical arguments for these algorithms are beyond the scope of this work, we do highlight one point that is relevant for model selection; that scenarios enjoying the largest efficiency gains from moving to adaptive algorithms are those where only a subset of arms have performance close to optimal (Jamieson et al., 2013). Model selection in NLP is often in this scenario, with only a small number of considered models being close to state-of-the-art, and so (as we demonstrate in Section 5) NLP has a lot to gain from using our adaptive model selection algorithms.

4.1 Fixed Budget by Sequential Halving

FB best-arm identification algorithms are typically based on successively eliminating arms until just a single (ideally) optimal arm remains (Jamieson et al., 2013; Jamieson and Nowak, 2014; Audibert and Bubeck, 2010). We focus on the sequential halving (SH) algorithm of Karnin et al. (2013) (Algorithm 1). Here we break our model selection routine into a series of ⌈log2 N⌉ rounds, each discarding the least promising half of our candidate model set, eventually resulting in a single remaining model. Our computational budget T is split equally among the rounds, to be equally budgeted among the models remaining in that round.

Algorithm 1 Sequential Halving for Fixed Budget Model Selection
Require: Computational Budget T, Set of N candidate models S
while |S| ≠ 1 do
    Evaluate each model m in S ⌊T / (|S| ⌈log2 N⌉)⌋ times
    Update the empirical means ˆµm
    Remove the ⌈|S| / 2⌉ models with worst ˆµm from S
end while
return Chosen model S

This allocation strategy ensures an efficient use of resources, for example the surviving final two models are evaluated 2^(⌈log2 N⌉ − 1) times as often as the models eliminated in the first round. An example run of the algorithm is summarised in Table 1.

Round    Candidate Models          # Evaluations
1        S = {m1, m2, m3, m4}      2
2        S = {m2, m4}              4
output:  S = {m2}

Table 1: An example of sequential elimination selecting between four models with a budget of T = 16. After two evaluations of each model, two models are eliminated. The remaining budget is then used to reliably decide between the remaining pair. Standard practice would evaluate each model an equal four times, wasting computational resources on sub-optimal models.

In the bandit literature (Karnin et al., 2013), this algorithm is shown to have strong theoretical guarantees of reliably choosing the optimal arm, as long as the reward distributions for each arm are bounded (limited to some finite range). This is not a restrictive assumption for NLP, as the majority of common performance metrics are bounded, for example accuracy, recall, precision and F-score are all constrained to lie in [0, 1]. We will demonstrate the effectiveness of sequential halving for model selection in Section 5.

4.2 Fixed Confidence by TTTS

For fixed confidence model selection, where we wish to guarantee the selection of an optimal arm at a given confidence level, we cannot just discard arms that are likely to be sub-optimal without accurately estimating this likelihood of sub-optimality. Although approaches that sequentially eliminate arms (like our sequential halving algorithm) do exist for FC best-arm identification (Jamieson et al., 2014; Karnin et al., 2013; Audibert and Bubeck, 2010; Even-Dar et al., 2002), the best theoretical guarantees for the FC problem come from algorithms that maintain the ability to sample any arm at any point in the selection procedure (Garivier and Kaufmann, 2016; Jamieson and Nowak, 2014).
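Returning briefly to Algorithm 1 above, the following is a minimal sketch of sequential halving; it is our illustration, not the FIESTA implementation, and it assumes an `evaluate(model)` callable that returns one fresh noisy score per call (for instance, a split-and-seed evaluation like the one sketched after Section 2).

```python
import math
from collections import defaultdict

def sequential_halving(models, evaluate, budget):
    """Fixed-budget selection in the style of Algorithm 1: evaluate every
    surviving model a fixed number of times per round, then discard the
    worse-performing half, until a single model remains."""
    surviving = list(models)
    rounds = max(1, math.ceil(math.log2(len(surviving))))
    scores = defaultdict(list)                       # all evaluations so far
    while len(surviving) > 1:
        per_model = max(1, budget // (len(surviving) * rounds))
        for m in surviving:
            scores[m].extend(evaluate(m) for _ in range(per_model))
        # Keep the floor(|S|/2) models with the best empirical means.
        surviving.sort(key=lambda m: sum(scores[m]) / len(scores[m]),
                       reverse=True)
        surviving = surviving[: len(surviving) // 2]
    return surviving[0]
```

With four models and T = 16 this reproduces the schedule in Table 1: two evaluations of each model in the first round, then four evaluations of each of the surviving pair.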
Rather than seeking to eliminate half the considered models at regular intervals of computation, a model is only evaluated until we can be sufficiently confident that it is suboptimal. Unfortunately, the performance guarantees for these methods are asymptotic results (in the number of arms and the number of arm pulls) and have little practical relevance to the (at most) tens of arms in a model selection problem. Our practical recommendation for FC model selection is a variant of the well-known Bayesian sampling algorithm, Thompson sampling, known as top-two Thompson sampling (TTTS) (Russo, 2016). We will see that this algorithm can efficiently allocate computational resources to quickly find optimal models. Furthermore, this approach provides full uncertainty estimation over the final choice of model, providing the confidence guarantees required for FC model selection. Our implementation makes the assumption that the evaluations of each model roughly follow a Gaussian distribution, with different means and variances. Although such assumptions are common in the model evaluation literature (Reimers and Gurevych, 2018) and for statistical testing in NLP (Dror et al., 2018), they could be problematic for the bounded metrics common in NLP. Therefore we also experimented with modelling the logit transformation of our evaluations, mapping our evaluation metric to the whole real line. However, for our examples of Section 5 we found that this mapping provided a negligible improvement in reliability and so was not worth including in our experimental results. This may not be the case for other tasks or less well-behaved evaluation metrics and so we include this functionality in the FIESTA package. 3We enforce a minimum of three evaluations to ensure that the t distribution in our posterior remains well-defined Algorithm 2 Top-Two Thompson Sampling Require: Desired Confidence 1 −δ, Set of N candidate models S Initialise a uniform belief π Evaluate each model in S three times 3 Update belief π while maxm∈S πm ≤1 −δ do Sample distinct m1 and m2 according to π Randomly choose between m1 and m2 Evaluate chosen model Update belief π end while return Chosen model argmaxm∈S πm To provide efficient model selection, we use our current believed probability that a given model is optimal πm = P (m∗= m) (producing a distribution over the models π = {π1, .., πN}) to drive the allocation of computational resources. Standard Thompson sampling is a stochastic algorithm that generates a choice of model by sampling from our current belief π, i.e. choosing to evaluate a model with the same probability that we believe is optimal (see Russo et al. (2018) for a concise introduction). Although this strategy allows us to focus computation on promising arms, it actually does so too aggressively. Once we believe that an arm is optimal with reasonably high confidence, computation will be heavily focused on evaluating this arm even though we need to become more confident about the sub-optimality of competing models to improve our confidence level. This criticism motivates our chosen algorithm TTTS (summarised in Algorithm 2), where instead of sampling a single model according to π, we sample two distinct models. We then uniformly choose between these two models for the next evaluation, allowing a greater exploration of the arms and much improved rates of convergence to the desired confidence level (Russo, 2016). 
We use this new evaluation to update our belief and continue making evaluations until we believe that a model is optimal with a higher probability than 1 −δ and terminate the algorithm. An example run of TTTS is demonstrated on a synthetic example in Figure 2, where we simulate from 5 Gaussian distributions with means {0.65, 0.69, 0.69, 0.70, 0.71} and standard deviation 0.01 to mimic accuracy measurements for a model selection problem. We now explain how we calculate π (our be2925 Figure 2: TTTS seeking the optimal model with confidence 0.99 from 5 synthetic models. The background represents our evolving belief π in the optimal model and the lines represent the proportion of the total evaluations made on each model. We start evaluating the models uniformly but our adaptive algorithm quickly focuses resources on the best models. lief in the location of the optimal model) using well-known results from Bayesian decision theory (see Berger (2013) for a comprehensive coverage). As justified earlier, we assume that the evaluations of model m are independently distributed with a Gaussian distribution N(µm, σ2 m) for some unknown mean µm and variance σ2 m. Although we are primarily interested in learning µm, we must also learn σ2 m in order to make confidence guarantees about the optimality of our selected model. Therefore, as well as keeping track of the sample means for the evaluations of each model ˆµm, we also keep track of the sample variances ˆSm and counters Tm of the number of times each model has been evaluated. To facilitate inference, we choose a uniform prior for the unknown µm and σm. Not only is this a conjugate prior for Gaussian likelihoods, but it is also shown to encourage beneficial exploratory behaviour when using Thompson sampling on Gaussian bandit problems (Honda and Takemura, 2014) and so allows fast identification of optimal arms (or models). After observing Tm evaluations of each model and producing estimates ˆµm and ˆSm, our posterior belief for each deviation between the true and observed model means µm −ˆµm satisfies (as derived in (Honda and Takemura, 2014)); s Tm(Tm −2) ˆSm (µm −ˆµm) | ˆµm, ˆSm ∼tTm−2, where td is a Student’s t-distribution with d degrees of freedom. π is then defined as the probability vector, such that πm is the relative probability that µm is the largest according to this posterior belief. Unfortunately, there is no closed form expression for the maximum of N t-distributions and so FIESTA uses a simple Monte-Carlo approximation based on the sample maxima of repeated draws from our posteriors for µm. In practice this is very accurate and did not slow down our experiments, especially in comparison to the time saved by reducing the number of model evaluations. 4.3 Batch Fixed Confidence by BTS NLP practitioners often have the computational capacity to fit models in parallel across multiple workers, evaluating multiple models or the same model across multiple seeds at once. Their model selection routines must therefore provide batches of models to evaluate. Our proposed solution to FB model selection naturally provides such batches, with each successive round of SH producing a collection of model evaluations that can be calculated in parallel. Unfortunately, TTTS for FC model selection successively chooses and then waits for the evaluation of single models and so is not naturally suited to parallelism. Extending TTTS to batch decision making is an open problem in the MAB literature. 
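Before turning to the batch setting, here is a minimal sketch, in our own notation rather than the FIESTA code, of the belief computation and top-two draw just described. The exact bookkeeping for the statistic ˆSm is an assumption on our part: we treat it as an accumulated squared-deviation statistic so that the displayed t scaling applies directly. All names and the toy numbers are illustrative only.

```python
import numpy as np

def belief_optimal(means, sq_dev, counts, n_samples=10000, rng=None):
    """Monte Carlo estimate of pi_m = P(model m has the largest true mean),
    drawing each mu_m from the Student-t posterior described above:
    mu_m = mu_hat_m + t_{T_m - 2} * sqrt(S_m / (T_m * (T_m - 2)))."""
    rng = np.random.default_rng(rng)
    means = np.asarray(means, dtype=float)        # mu_hat_m
    sq_dev = np.asarray(sq_dev, dtype=float)      # S_hat_m (squared-deviation statistic)
    counts = np.asarray(counts, dtype=float)      # T_m, each at least 3
    scale = np.sqrt(sq_dev / (counts * (counts - 2.0)))
    draws = means + scale * rng.standard_t(df=counts - 2.0,
                                           size=(n_samples, len(means)))
    winners = np.argmax(draws, axis=1)
    return np.bincount(winners, minlength=len(means)) / n_samples

def top_two_draw(pi, rng=None):
    """Pick the next model to evaluate as in Algorithm 2: sample two distinct
    models according to pi, then choose between them uniformly at random."""
    rng = np.random.default_rng(rng)
    pi = np.asarray(pi, dtype=float) + 1e-12      # avoid exactly-zero weights
    pi /= pi.sum()
    m1, m2 = rng.choice(len(pi), size=2, replace=False, p=pi)
    return int(m1) if rng.random() < 0.5 else int(m2)

# Toy run with the five synthetic models of Section 4.2 after three
# evaluations each (values are illustrative, not real FIESTA statistics).
mu_hat = [0.65, 0.69, 0.69, 0.70, 0.71]
T = [3, 3, 3, 3, 3]
S = [(t - 1) * 0.01 ** 2 for t in T]
pi = belief_optimal(mu_hat, S, T, rng=0)
print(pi, top_two_draw(pi, rng=1))
```

The selection loop then alternates these two steps, re-estimating pi after every new evaluation and stopping once max(pi) exceeds 1 − δ.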
Therefore, we instead consider batch Thompson sampling (BTS), an extension of standard Thompson sampling (as described in Section 4.2) to batch sampling from the related field of Bayesian optimisation (Kandasamy et al., 2018). At each step in our selection process we take B model draws according to our current belief π that the model is optimal, where B represents our computational capacity. This is in contrast to the single draw in standard Thompson sampling and the drawn pair in TTTS. In addition, this approach extends to the asynchronous setting, where rather than waiting for the whole batch of B models to be evaluated before choosing the next batch, each worker can draw a new model to evaluate according to the updated π. This flexibility provides an additional efficiency gain for problems where the different models have a wide range of run times. 2926 5 Experiments We now test our three algorithms on a challenging model selection task typical of NLP, selecting between eight Target Dependent Sentiment Analysis (TDSA) models based on their macro F1 score. We consider two variants of four reimplementations of well-known TDSA models: ATAE (Wang et al., 2016), IAN (Ma et al., 2017), TDLSTM (Tang et al., 2016) (without target words in the left and right LSTM), and a non-targetaware LSTM method used as the baseline in Tang et al. (2016). These methods represent state-of-the-art within TDSA, with only small differences in performance between TDLSTM, IAN, and ATAE (see figure 3). All the models are re-implemented in PyTorch (Paszke et al., 2017) using AllenNLP (Gardner et al., 2018). To ensure the only difference between the models is their network architecture the models use the same optimiser settings and the same regularisation. All words are lower cased and we use the same Glove common crawl 840B token 300 dimension word embedding (Pennington et al., 2014). We use variational (Gal and Ghahramani, 2016) and regular (Hinton et al., 2012) dropout for regularisation and an ADAM (Kingma and Ba, 2014) optimiser with standard settings, a batch size of 32 and use at most 100 epochs (with early stopping on a validation set). Many of these settings are not the same as originally implemented, however, having the same training setup is required for fair comparison (this explains the differences between our results and the original implementations). To increase the difficulty of our model selection problem, we additionally create four extra models by reducing the dimensions of the Glove vectors to 50 and removing dropout. Although these models are clearly not state-of-the-art, they increase the size of our candidate model set and so provide a more complicated model selection problem (an intuition discussed in Appendix A). All of the TDSA experiments are conducted on the well-studied SemEval 2014 task 4 Restaurant dataset (Pontiki et al., 2014) and we force trainval-test splits to follow the same ratios as this dataset’s official train-test split. Each individual model evaluation is then made on a randomly generated train-test split and random seed to access both sources of evaluation variability. Figure 3: F1 scores for our candidate TDSA models. After 500 evaluations of each model on different data splits and model seeds we see that the TDLSTM is the state-of-the-art model. 5.1 Fixed Budget Model Selection We use the TDSA model selection problem to test fixed budget model selection. 
To thoroughly test our algorithm, we consider an additional four models based on 200 dimensional Glove vectors, bringing the total number of models to 12. We compare our approach of sequential halving to the standard non-adaptive approach of splitting the available computational budget equally between the 12 candidate models. For example, we would allocate a budget of 24 model evaluations as evaluating each model two times and selecting the model with the highest sample mean. Figure 4 compares the proportion of 10, 000 runs of sequential halving that correctly identify the optimal model with the proportion identified by the non-adaptive approach with the same computational budget. Sequential halving identifies the optimal model more reliably (≈15% more often) than the current approach to FB model selection in NLP. Using sequential halving with 204 evaluations almost always (99% of runs) selects the optimal model, whereas the non-adaptive approach is only correct 85% of the time. 5.2 Fixed Confidence Model Selection We perform fixed confidence model selection on the eight TDSA candidate models (the full models and those based on 50 dimensional vectors). We compare TTTS to a non-adaptive approach where all models are evaluated at each step, irrespective of the results of earlier evaluations (the standard 2927 # evaluations with Non-Adaptive # evaluations with TTTS δ min mean max % correctly selected min mean max % correctly selected 0.05 48 281 1552 100 27 130 518 100 0.1 40 206 1192 99 24 96 460 99 0.2 32 128 608 96 24 65 274 97 Table 2: Number of evaluations required to select a TDSA model at a range of confidence levels across 500 runs of TTTS and a standard non-adaptive approach. Figure 4: Proportion of the runs correctly selecting the optimal TDSA model using sequential halving against the standard non-adaptive approach. Sequential halving consistently identifies the optimal model at a significantly higher rate across a wide range of budgets. approach for model selection in NLP). We run this non-adaptive approach until we reach the required confidence level calculated using the same Bayesian framework as in TTTS. We run each approach 500 times and note the number evaluations required to get to a range of confidence levels (Table 2) alongside the proportion that correctly identify the optimal model. TTTS requires substantially less model evaluations (in terms of the minimum, mean and max across our runs) to reach a given confidence level than the non-adaptive approach, achieving the same reliability at half the cost (on average). TTTS is able to quickly identify sub-optimal models and so can avoid wasting resources repeatedly evaluating the whole candidate set. Finally, we test our proposed approach to batch FC model selection by running exactly the same experiment but using BTS to choose collections of four and eight models at a time (Table 3). As expected, performance degrades as we increase batch size, with batches of four allowing more fine grained control over model evaluations than using batches of eight. In particular, due to the exploitative nature of Thompson sampling, we see that selecting models to a very high confidence (95%) requires more computation with BTS than the standard non-adaptive approach. However, BTS does reach the other confidence levels faster and correctly identifies the optimal model more often. 
However, as TTTS performs significantly better across all confidence levels, we emphasise the need for a less-exploitative version of BTS with adjustments similar to those used in TTTS. 6 Conclusions The aim of this paper has been to propose three algorithms for model selection in NLP, providing efficient and reliable selection for two distinct realistic scenarios: fixed confidence and fixed budget model selection. Crucially, our research further calls into question the current practice in NLP evaluation as used in the literature and international competitions such as SemEval. Our algorithms adaptively allocate resources to evaluate promising models, basing evaluations across multiple random seeds and train-test splits. We demonstrate that this allows significant computational savings and improves reliability over current model selection approaches. Although we have demonstrated that our algorithms perform well on a complex model selection problem typical of NLP, there is still work to be done to create algorithms more suited to these problems. Future research directions include making selection routines more robust to evaluation outliers, relaxing our Gaussian assumptions and developing more effective batch strategies. 7 Acknowledgements The authors are grateful to reviewers, whose comments and advice have greatly improved this paper. The research was supported by an EPSRC 2928 # evaluations with BTS-4 # evaluations with BTS-8 δ min mean max % correctly selected min mean max % correctly selected 0.05 28 282 1392 100 88 315 1128 100 0.1 24 144 520 100 56 178 784 100 0.2 24 76 280 98 32 106 352 99 Table 3: Number of evaluations of required to select a TDSA model at a range of confidence levels across 500 runs of BTS selecting batches of 4 and 8 models. Doctoral Training Grant and the STOR-i Centre for Doctoral Training. We thank Dr Chris Jewell at the Centre for Health Informatics, Computing, and Statistics, Lancaster University for the loan of a NVIDIA GP100-equipped workstation for this study. References Andrea Arcuri and Lionel Briand. 2014. A hitchhiker’s guide to statistical tests for assessing randomized algorithms in software engineering. Software Testing, Verification and Reliability, 24(3):219–250. Jean-Yves Audibert and S´ebastien Bubeck. 2010. Best arm identification in multi-armed bandits. In COLT - 23th Conference on Learning Theory - 2010, pages 13–p. James O Berger. 2013. Statistical decision theory and Bayesian analysis. Springer Science & Business Media. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392. Association for Computational Linguistics. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Eyal Even-Dar, Shie Mannor, and Yishay Mansour. 2002. Pac bounds for multi-armed bandit and markov decision processes. In International Conference on Computational Learning Theory, pages 255–270. Springer. Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2001. The elements of statistical learning. Springer series in statistics New York, NY, USA:. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. 
Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1019–1027. Curran Associates, Inc. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6. Association for Computational Linguistics. Aur´elien Garivier and Emilie Kaufmann. 2016. Optimal best arm identification with fixed confidence. In Conference on Learning Theory, pages 998–1027. Gholamreza Haffari, Tuan Dung Tran, and Mark Carman. 2017. Efficient benchmarking of nlp apis using multi-armed bandits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 408–416. Association for Computational Linguistics. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Junya Honda and Akimichi Takemura. 2014. Optimality of thompson sampling for gaussian bandits depends on priors. In Artificial Intelligence and Statistics, pages 375–383. Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sebastien Bubeck. 2013. On finding the largest mean among many. arXiv preprint arXiv:1306.3917. Kevin Jamieson, Matthew Malloy, Robert Nowak, and S´ebastien Bubeck. 2014. lilucb: An optimal exploration algorithm for multi-armed bandits. In Conference on Learning Theory, pages 423–439. Kevin Jamieson and Robert Nowak. 2014. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In 2014 48th Annual Conference on Information Sciences and Systems (CISS), pages 1–6. IEEE. Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, and Barnab´as P´oczos. 2018. Parallelised bayesian optimisation via thompson sampling. In International Conference on Artificial Intelligence and Statistics. 2929 Zohar Karnin, Tomer Koren, and Oren Somekh. 2013. Almost optimal exploration in multi-armed bandits. In International Conference on Machine Learning, pages 1238–1246. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ron Kohavi. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th international joint conference on Artificial intelligence-Volume 2, pages 1137–1143. Morgan Kaufmann Publishers Inc. Tze Leung Lai and Herbert Robbins. 1985. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics, 6(1):4–22. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Carolin Lawrence, Artem Sokolov, and Stefan Riezler. 2017. Counterfactual learning from bandit feedback under deterministic logging : A case study in statistical machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2566–2576. Association for Computational Linguistics. Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. 
A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661–670. ACM. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4068–4074. AAAI Press. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Shie Mannor and John N Tsitsiklis. 2004. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research, 5(Jun):623–648. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Henry Moss, David Leslie, and Paul Rayson. 2018. Using j-k-fold cross validation to reduce variance when tuning nlp models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2978–2989. Association for Computational Linguistics. Khanh Nguyen, Hal Daum´e III, and Jordan BoydGraber. 2017. Reinforcement learning for bandit neural machine translation with simulated human feedback. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1464–1474. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2018. Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. arXiv preprint arXiv:1803.09578. Daniel Russo. 2016. Simple bayesian algorithms for best arm identification. In Conference on Learning Theory, pages 1417–1418. Daniel J Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, Zheng Wen, et al. 2018. A tutorial on thompson sampling. Foundations and Trends in Machine Learning, 11(1):1–96. SemEval. 2018. Proceedings of The 12th International Workshop on Semantic Evaluation. Association for Computational Linguistics, New Orleans, Louisiana. 2930 Artem Sokolov, Julia Kreutzer, Christopher Lo, and Stefan Riezler. 2016. Learning structured predictors from bandit feedback for interactive nlp. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1610–1620. Association for Computational Linguistics. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016. Effective lstms for target-dependent sentiment classification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3298– 3307. The COLING 2016 Organizing Committee. Sof´ıa S Villar, Jack Bowden, and James Wason. 2015. Multi-armed bandit models for the optimal design of clinical trials: benefits and challenges. Statistical science: a review journal of the Institute of Mathematical Statistics, 30(2):199. Yequan Wang, Minlie Huang, xiaoyan zhu, and Li Zhao. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615. Association for Computational Linguistics. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879–3889. Association for Computational Linguistics. 8 Appendix A Characterising the Difficulty of a Model Selection Problem We briefly summarise a result from the best-arm identification literature, providing intuition for our experiment section by providing a mechanism to characterise the difficulty of a model selection problem. Intuitively, model selection difficulty increases with the size of the set of candidate models N and as the performance of sub-optimal models approaches that of the optimal model (and becomes harder to distinguish), i.e. as µm∗−µm gets small for some sub-optimal arm m. In fact, it is well known in the MAB literature that it is exactly these two properties that characterise the complexity of a best-arm-identification problem, confirming our intuition for model selection. Mannor and Tsitsiklis (2004) show that the number of arm pulls required for the identification of a best arm at a confidence level 1 −δ has at least a computational complexity of O(H log(1/δ)), where H = X m′∈S\{m∗} 1 (µm∗−µm)2 .
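As a small illustration of this difficulty measure (ours, not the authors'), the sum above, H = Σ over sub-optimal models m of 1 / (µm∗ − µm)², can be computed directly from the candidate models' mean performances; here we plug in the synthetic means used for Figure 2.

```python
def selection_hardness(means):
    """H = sum over sub-optimal models of 1 / (mu_star - mu_m)^2."""
    mu_star = max(means)
    return sum(1.0 / (mu_star - mu) ** 2 for mu in means if mu != mu_star)

# The five synthetic models from Section 4.2 (means 0.65 ... 0.71):
print(selection_hardness([0.65, 0.69, 0.69, 0.70, 0.71]))
```

Small gaps between the best and second-best models dominate the sum, which is why problems with several near-state-of-the-art candidates require the most evaluations.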
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931–2951 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2931 Is Attention Interpretable? Sofia Serrano∗ Noah A. Smith∗† ∗Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA †Allen Institute for Artificial Intelligence, Seattle, WA, USA {sofias6,nasmith}@cs.washington.edu Abstract Attention mechanisms have recently boosted performance on a range of NLP tasks. Because attention layers explicitly weight input components’ representations, it is also often assumed that attention can be used to identify information that models found important (e.g., specific contextualized word tokens). We test whether that assumption holds by manipulating attention weights in already-trained text classification models and analyzing the resulting differences in their predictions. While we observe some ways in which higher attention weights correlate with greater impact on model predictions, we also find many ways in which this does not hold, i.e., where gradient-based rankings of attention weights better predict their effects than their magnitudes. We conclude that while attention noisily predicts input components’ overall importance to a model, it is by no means a fail-safe indicator.1 1 Introduction Interpretability is a pressing concern for many current NLP models. As they become increasingly complex and learn decision-making functions from data, ensuring our ability to understand why a particular decision occurred is critical. Part of that development has been the incorporation of attention mechanisms (Bahdanau et al., 2015) into models for a variety of tasks. For many different problems—to name a few, machine translation (Luong et al., 2015), syntactic parsing (Vinyals et al., 2015), reading comprehension (Hermann et al., 2015), and language modeling (Liu and Lapata, 2018)—incorporating attention mechanisms into models has proven beneficial for performance. While there are many variants of attention (Vaswani et al., 2017), each for1Code is available at https://github.com/ serrano-s/attn-tests. mulation consists of the same high-level goal: calculating nonnegative weights for each input component (e.g., word) that together sum to 1, multiplying those weights by their corresponding representations, and summing the resulting vectors into a single fixed-length representation. Since attention calculates a distribution over inputs, prior work has used attention as a tool for interpretation of model decisions (Wang et al., 2016; Lee et al., 2017; Lin et al., 2017; Ghaeini et al., 2018). The existence of so much work on visualizing attention weights is a testament to attention’s popularity in this regard; to name just a few examples of these weights being examined to understand a model, recent work has focused on goals from explaining and debugging the current system’s decision (Lee et al., 2017; Ding et al., 2017) to distilling important traits of a dataset (Yang et al., 2017; Habernal et al., 2018). Despite this, existing work on interpretability is only beginning to assess what computed attention weights actually communicate. In an independent and contemporaneous study, Jain and Wallace (2019) explore whether attention mechanisms can identify the relative importance of inputs to the full model, finding them to be highly inconsistent predictors. 
In this work, we apply a different analysis based on intermediate representation erasure to assess whether attention weights can instead be relied upon to explain the relative importance of the inputs to the attention layer itself. We find similar cause for concern: attention weights are only noisy predictors of even intermediate components’ importance, and should not be treated as justification for a decision. 2 Testing for Informative Interpretability We focus on five- and ten-class text classification models incorporating attention, as explaining the 2932 Figure 1: Our method for calculating the importance of representations corresponding to zeroed-out attention weights, in a hypothetical setting with four output classes . reasons for text classification has been a particular area of interest for recent work in interpretability (Yang et al., 2016; Ribeiro et al., 2016; Lei et al., 2016; Feng et al., 2018). In order for a model to be interpretable, it must not only suggest explanations that make sense to people, but also ensure that those explanations accurately represent the true reasons for the model’s decision. Note that this type of analysis does not rely on the true labels of the data; if a model produces an incorrect output, but a faithful explanation for which factors were important in that calculation, we still consider it interpretable. We take the implied explanation provided by visualizing attention weights to be a ranking of importance of the attention layer’s input representations, which we denote I: if the attention allocated to item i ∈I is higher than that allocated to item j ∈I, then i is presumed “more important” than j to the model’s output. In this work, we focus on whether the attention weights’ suggested importance ranking of I faithfully describes why the model produced its output, echoing existing work on explanation brittleness for other model components (Ghorbani et al., 2017). 2.1 Intermediate Representation Erasure We are interested in the impact of some contextualized inputs to an attention layer, I′ ⊂I, on the model’s output. To examine the importance of I′, we run the classification layer of the model twice (Figure 1): once without any modification, and once after renormalizing the attention distribution with I′’s attention weights zeroed out, similar to other erasure-based work (Li et al., 2016; Feng et al., 2018). We then observe the resulting effects on the model’s output. We erase at the attention layer to isolate the effects of the attention layer from the encoder preceding it. Our reasoning behind renormalizing is to keep the output document representation from artificially shrinking closer to 0 in a way never encountered during training, which could make subsequent measurements unrepresentative of the model’s behavior in spaces to which it does map inputs. One point worth noting is the facet of interpretability that our tests are designed to capture. By examining only how well attention represents the importance of intermediate quantities, which may themselves already have changed uninterpretably from the model’s inputs, we are testing for a relatively low level of interpretability. So far, other work looking at attention has examined whether attention suffices as a holistic explanation for a model’s decision (Jain and Wallace, 2019), which is a higher bar. We instead focus on the lowest standard of interpretability that attention might be expected to meet, ignoring prior model layers. 
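As a concrete picture of this erasure-and-renormalisation step, here is a minimal NumPy sketch (ours, not the authors' implementation); the attended representations, attention weights, and linear classifier in the toy example are placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def output_with_erasure(alphas, H, classify, erase_idx=()):
    """Recompute the model's output distribution after zeroing the attention
    weights in `erase_idx` and renormalising the remaining ones.

    alphas   : (n,) attention weights over the n attended representations
    H        : (n, d) the attended (contextualised) representations
    classify : maps a (d,) document vector to unnormalised class scores
    """
    a = np.array(alphas, dtype=float)
    a[list(erase_idx)] = 0.0
    a = a / a.sum()              # renormalise so the remaining weights sum to 1
    doc = a @ H                  # attention-weighted sum of representations
    return softmax(classify(doc))

# Toy example: 4 attended items, 5 output classes, a random linear classifier.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
alphas = softmax(rng.normal(size=4))
W = rng.normal(size=(8, 5))
p = output_with_erasure(alphas, H, lambda d: d @ W)
q = output_with_erasure(alphas, H, lambda d: d @ W,
                        erase_idx=[int(np.argmax(alphas))])
print(p.argmax(), q.argmax())    # did erasing the top-attention item flip the decision?
```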
We denote the output distributions (over labels) as p (the original) and qI′ (where we erase attention for I′). The question now becomes how to operationalize “importance” given p and qI′. There are many quantities that could arguably capture information about importance. We focus on two: the Jensen-Shannon (JS) divergence between output distributions p and qI′, and whether the argmaxes of p and qI′ differ, indicating a decision flip. 3 Data and Models We investigate four model architectures on a topic classification dataset (Yahoo Answers; Zhang et al., 2015) and on three review ratings datasets: IMDB (Diao et al., 2014),2 Yelp 2017,3 and Amazon (Zhang et al., 2015). Statistics for each dataset are listed in Table 1. Our model architectures are inspired by the hierarchical attention network (HAN; Yang et al., 2016), a text classification model with two layers of attention, first to the word tokens in each sentence and then to the resulting sentence representations. The layer that classifies the document representation is linear with a softmax at the end. We conduct our tests on the softmax formula2downloaded from github.com/nihalb/JMARS 3from www.yelp.com/dataset_challenge 2933 Dataset Av. # Words (s.d.) Av. # Sents. (s.d.) # Train. + Dev. # Test # Classes Yahoo Answers 104 (114) 6.2 (5.9) 1,400,000 50,000 10 IMDB 395 (259) 16.2 (10.7) 122,121 13,548 10 Amazon 73 (48) 4.3 (2.6) 3,000,000 650,000 5 Yelp 125 (109) 7.0 (5.6) 650,000 50,000 5 Table 1: Dataset statistics. Figure 2: Flat attention network (FLAN) demonstrating a convolutional encoder. Each contextualized word representation is the concatenation of two sizes of convolutions: one applied over the input representation and its two neighbors to either side, and the other applied over the input representation and its single neighbor to either side. For details, see Appendix A.1. tion of attention,4 which is used by most models, including the HAN. Specifically, we use the additive formulation originally defined in Bahdanau et al. (2015). Given attention layer ℓ’s learned parameters, element i of a sequence, and its encoded representation hi, the attention weight αi is computed using ℓ’s learned context vector cℓas follows: ui = tanh(Wℓhi + bℓ) αi = exp u⊤ i cℓ P i exp u⊤ i cℓ We evaluate on the original HAN architecture, but we also vary that architecture in two key ways: 1. Number of attention layers: besides exploring models with a final layer of attention over sentence representations (which we denote with a “HAN” prefix), we also train “flat” attention networks with only one layer of attention over all contextualized word tokens 4Alternatives such as sparse attention (Martins and Astudillo, 2016) and unnormalized attention (Ji and Smith, 2017) have been proposed. (which we denote with a “FLAN” prefix). In either case, though, we only run tests on models’ final layer of attention. 2. Reach of encoder contextualization: The original HAN uses recurrent encoders to contextualize input tokens prior to an attention layer (specifically, bidirectional GRUs running over the full sequence). Aside from biRNNs, we also experiment with models that instead contextualize word vectors by convolutions on only a token’s close neighbors, inspired by Kim (2014). See Figure 2 for a diagram of the FLAN architecture using a convolutional encoder. We denote this variant of an architecture with a “conv” suffix. Finally, we also test models that are trained with no contextualizing encoder at all; we denote these with a “noenc” suffix. 
The classification accuracy of each of our trained models is listed in Table 3 in the appendix, along with training details for the different models. 4 Single Attention Weights’ Importance As a starting point for our tests, we investigate the relative importance of attention weights when only one weight is removed. Let i∗∈I be the component with the highest attention and let αi∗be its attention. We compare i∗’s importance to some other attended item’s importance in two ways. 4.1 JS Divergence of Model Output Distributions We wish to compare how i∗’s impact on the model’s output distribution compares to the impact corresponding to a random attended item r drawn uniformly from I. Our first approach to this will be to calculate two JS divergences—one being the JS divergence of the model’s original output distribution from its output distribution after removing only i∗, and the other after removing only r—and compare them to each other. We subtract 2934 the output JS divergence after removing r from the output JS divergence after removing i∗: ∆JS = JS(p, q{i∗}) −JS(p, q{r}) (1) We plot this quantity against the difference ∆α = αi∗−αr in Figure 3. We show results on the HANrnn, as the trends for the other models are very similar; see Figures 7–8 and the tables in Figure 9 in the Appendix for full results. Figure 3: Difference in attention weight magnitudes versus ∆JS for HANrnns, comparable to results for the other architectures; for their plots, see Appendix A.2. Figure 4: These are the counts of test instances for the HANrnn models for which i∗’s JS divergence was smaller, binned by ∆α. These counts comprise a small fraction of the test set sizes listed in Table 1. Intuitively, if i∗is truly the most important, then we would expect Eq. 1 to be positive, and that is what we find the vast majority of the time. In addition, examining Figure 3, we see that nearly all negative ∆JS values are close to 0. By binning occurrences of negative ∆JS values by the difference between αi∗and αr in Figure 4, we also see that in the cases where i∗had a smaller effect, the gap between i∗’s attention and r’s tends to be small. This is encouraging, indicating that in these cases, i∗and r are nearly “tied” in attention. However, the picture of attention’s interpretability grows somewhat more murky when we begin to consider the magnitudes of positive ∆JS values in Figure 3. We notice across datasets that even for quite large differences in attention weights like 0.4, many of the positive ∆JS are still quite close to zero. Although we do finally see an upward swing in ∆JS values once ∆α gets even larger, indicating only one very high attention weight in the distribution, this still leaves many open questions about exactly how much difference in impact i∗and r can typically be expected to have. 4.2 Decision Flips Caused by Zeroing Attention Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.5 8.7 Yes 2.2 12.2 No 1.3 89.6 No 1.4 84.2 Amazon Yelp Yes No Yes No Yes 2.7 7.6 Yes 1.5 8.9 Remove i∗: Decision flip? No 2.7 87.1 No 1.9 87.7 Table 2: Percent of test instances in each decision-flip indicator variable category for each HANrnn. Since attention weights are often interpreted as an explanation for a model’s argmax decision, our second test looks at another, more immediately visible change in model outputs: decision flips. For clarity, we limit our discussion to results for the HANrnns, which reflect the same patterns observed for the other architectures. (Results for all other models are in Appendix A.2.) 
Table 2 shows, for each dataset, a contingency table for the two binary random variables (i) does zeroing αi∗(and renormalizing) result in a decision flip? and (ii) does doing the same for a different randomly chosen weight αr result in a decision flip? To assess the comparative importance of i∗and r, we consider when exactly one erasure changes the decision (off-diagonal cells). For 2935 attention to be interpretable, the blue, upper-right values (i∗, not r, flips a decision) should be much larger than the orange, lower-left values (r, not i∗, flips a decision), which should be close to zero.5 Although for some datasets in Table 2, the “orange” values are non-negligible, we mostly see that their fraction of total off-diagonal values mirrors the fraction of negative occurrences of Eq. 1 in Figure 4. However, it’s somewhat startling that in the vast majority of cases, erasing i∗does not change the decision (“no” row of each table). This is likely explained in part by the signal pertinent to the classification being distributed across a document (e.g., a “Sports” question in the Yahoo Answers dataset could signal “sports” in a few sentences, any one of which suffices to correctly categorize it). However, given that these results are for the HAN models, which typically compute attention over ten or fewer sentences, this is surprising. Altogether, examining importance from a single-weight angle paints a tentatively positive picture of attention’s interpretability, but also raises several questions about the many cases where the difference in impacts between i∗and r is almost identical (i.e., ∆JS values close to 0 or the many cases where neither i∗nor r cause a decision flip). To answer these questions, we require tests with a broader scope. 5 Importance of Sets of Attention Weights Often, we care about determining the collective importance of a set of components I′. To address that aspect of attention’s interpretability and close gaps left by single-weight tests, we introduce tests to determine how multiple attention weights perform together as importance predictors. 5.1 Multi-Weight Tests For a hypothesized ranking of importance, such as that implied by attention weights, we would expect the items at the top of that ranking to function as a concise explanation for the model’s decision. The less concise these explanations get, and the farther down the ranking that the items truly driving the model’s decision fall, the less likely it becomes for that ranking to truly describe importance. In other words, we expect that the top items 5We see this pattern especially strongly for FLANs (see Appendix), which is unsurprising since I is all words in the input text, so most attention weights are very small. in a truly useful ranking of importance would comprise a minimal necessary set of information for making the model’s decision. The idea of a minimal set of inputs necessary to uphold a decision is not new; Li et al. (2016) use reinforcement learning to attempt to construct such a minimal set of words, Lei et al. (2016) train an encoder to constrain the input prior to classification, and much of the work that has been done on extractive summarization takes this concept as a starting point (Lin and Bilmes, 2011). However, such work has focused on approximating minimal sets, instead of evaluating the ability of other importance-determining “shortcuts” (such as attention weight orderings) to identify them. 
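Both the single-weight test of Section 4.2 and the set-based tests introduced next rest on the same operation: zero a set of attention weights, renormalize, and check whether the argmax decision changes. A minimal sketch follows, with the same hypothetical `output_dist_fn` hook as before; aggregating the indicator pairs over a test set (e.g., with a Counter) yields the percentages of Table 2.

```python
import numpy as np

def erase(alpha, indices):
    """Zero the attention weights in `indices` and renormalize the remainder."""
    a = np.array(alpha, dtype=float)
    a[list(indices)] = 0.0
    return a / a.sum() if a.sum() > 0 else a

def decision_flip(alpha, indices, output_dist_fn):
    """Does erasing the attended items in `indices` change the argmax decision?
    `output_dist_fn(alpha)` reruns the model above the attention layer and
    returns its output distribution (a placeholder for the real model)."""
    original = np.argmax(output_dist_fn(np.asarray(alpha, float)))
    return np.argmax(output_dist_fn(erase(alpha, indices))) != original

def table2_cells(alpha, output_dist_fn, rng):
    """Indicator pair for one instance: (i* flips decision?, random r flips decision?)."""
    i_star = int(np.argmax(alpha))
    r = rng.choice([i for i in range(len(alpha)) if i != i_star])
    return (decision_flip(alpha, [i_star], output_dist_fn),
            decision_flip(alpha, [r], output_dist_fn))
```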
Nguyen (2018) leveraged the idea of minimal sets in a much more similar way to our work, comparing different input importance orderings. Concretely, to assess the validity of an importance ranking method (e.g., attention), we begin erasing representations from the top of the ranking downward until the model’s decision changes. Ideally, we would then enumerate all possible subsets of that instance’s components, observe whether the model’s decision changed in response to removing each subset, and then report whether the size of the minimal decision-flipping subset was equal to the number of items that had needed to be removed to achieve a decision flip by following the ranking. However, the exponential number of subsets for any given instance’s sequence of components (word or sentence representations, in our case) makes such a strategy computationally prohibitive, and so we adopt a different approach. Instead, in addition to our hypothesized importance ranking (attention weights), we consider alternative rankings of importance; if, using those, we repeatedly discover cases where removing a smaller subset of items would have sufficed to change the decision, this signals that our candidate ranking is a poor indicator of importance. 5.2 Alternative Importance Rankings Exhaustively searching the space of component subsets would be far too time-consuming in practice, so we introduce three other ranking schemes. The first is to randomly rank importance. We expect that this ranking will perform quite poorly, but it provides a point of comparison by which to validate that ranking by descending attention weights is at least somewhat informative. 2936 The second ranking scheme, inspired by Li et al. (2015) and Feng et al. (2018), is to order the attention weights by the gradient of the decision function with respect to each calculated attention weight, in descending order. Since each of the datasets on which we evaluate has either five or ten output classes, we take the decision function given a real-valued model output vector to be d(x) = exp (maxi (xi)) P i exp xi . Unlike the last two proposed rankings, our third ranking scheme uses attention weights, but supplements them with information about the gradient. For this ranking, we multiply each of our calculated gradients from our previous proposed ranking scheme by their corresponding attention weight magnitude. Under this ordering, attended items that have both a high attention weight and a high calculated gradient with respect to their attention weight will be ranked most important. We introduce these last two rankings as an attempt to discover smaller sets not produced by the attention weight ranking. Note, however, that we still do not take either as a gold-standard indicator of importance to the model, as with the gradient in Ross et al. (2017) and Melis and Jaakkola (2018), but merely as an alternative ordering method. The “gold standard” in our case would be the minimal set of attention weights to zero out for the decision to change, which none of our ordering methods will necessarily find for a particular instance. 5.3 Instances Excluded from Analysis In cases where removing all but one input to the attention layer still does not produce a decision flip, we finish the process of removing components by removing the final representation and replacing the output of the attention layer with an arbitrary vector; we use the zero vector for our tests. 
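The alternative rankings can be sketched as below. Note the hedges: the paper uses the gradient of the decision function d(x) = exp(max_i x_i) / Σ_i exp(x_i) with respect to each attention weight, presumably obtained by backpropagation; to keep this sketch framework-free, that gradient is approximated by finite differences through a hypothetical `logits_fn` hook (the part of the model above the attention layer), which is a stand-in rather than the authors' implementation.

```python
import numpy as np

def decision_fn(x):
    """d(x) = exp(max_i x_i) / sum_i exp(x_i): probability of the argmax class."""
    e = np.exp(x - x.max())
    return e.max() / e.sum()

def attention_grad(alpha, logits_fn, eps=1e-4):
    """Finite-difference approximation of the gradient of d(logits_fn(alpha))
    with respect to each calculated attention weight alpha_i."""
    base = decision_fn(logits_fn(alpha))
    grad = np.zeros_like(alpha, dtype=float)
    for i in range(len(alpha)):
        bumped = np.array(alpha, dtype=float)
        bumped[i] += eps
        grad[i] = (decision_fn(logits_fn(bumped)) - base) / eps
    return grad

def importance_rankings(alpha, logits_fn, rng):
    """Item orderings (most important first) under the four ranking schemes."""
    alpha = np.asarray(alpha, dtype=float)
    g = attention_grad(alpha, logits_fn)
    return {
        "attention": np.argsort(-alpha),
        "random": rng.permutation(len(alpha)),
        "gradient": np.argsort(-g),
        "grad_x_attention": np.argsort(-(g * alpha)),
    }
```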
Even so, since every real-valued vector output by the attention layer is mapped to an output distribution, removing this final item will still not change the classification decision for instances that the model happened to originally map to that same class. We exclude such instances for which the decision never changed from all subsequent analyses. We also set aside any test instances with a sequence length of 1 for their final attention layer, as there is only one possible ordering for such cases. 5.4 Attention Does Not Optimally Describe Model Decisions Examining our results in Figure 5, we immediately see that ranking importance by descending attention weights is not optimal for our models with encoders. While removing intermediate representations in decreasing order by attention weights often leads to a decision flip faster than a random ranking, it also clearly falls short of matching (or even approaching) the decision-flipping efficiency of either the gradient ordering or gradientattention-product ordering in many cases. In addition, although the product-based ranking often (but not always) requires slightly fewer removed items than the gradient ranking, we see that the purely gradient-based ranking ignoring attention magnitudes comes quite close to it, far outperforming purely attention-based orderings. For ten of our 16 models with encoders, removing by gradient found a smaller decision-flipping set of items than attention for over 50% of instances in that model’s test set, with that percentage often being much higher. In fact, for every model with an encoder that we tested, there were at least 1.6 times as many test instances where the purely gradientbased ranking managed a decision flip faster than the attention-based ranking than vice versa. We do not claim that ranking importance by either descending gradients or descending gradientattention products is optimal, but in many cases they discover much smaller decision-flipping sets of items than attention weights. Therefore, ranking representations in descending order by attention weight clearly fails to uncover a minimal set of decision-flipping information much of the time, which is a warning sign that we should be skeptical of trusting groups of attention weight magnitudes as importance indicators. 5.5 Decision Flips Often Occur Late For all ordering schemes we tried, we were struck by the large fraction of items that had to be removed to achieve a decision flip in many models. This is slightly less surprising for the HANs, as they compute attention over shorter sequences of sentences (see Table 1). For the FLAN models, though, this result is highly unexpected. The sequences across which FLANs compute attention are usually hundreds of tokens in length, meaning most attention weights will likely be minuscule. The distributions of tokens removed by our dif2937 Figure 5: The distribution of fractions of items removed before first decision flips on three model architectures under different ranking schemes. Boxplot whiskers represent the highest/lowest data point within 1.5 IQR of the higher/lower quartile, and dataset names at the bottom apply to their whole column. In several of the plots, the median or lower quartile aren’t visible; in these cases, the median/lower quartile is either 1 or very close to 1. ferent orderings that we see for the FLANrnns in Figure 5 are therefore remarkably high, especially given that all of our classification tasks have at least five output classes. 
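Operationally, the erasure-until-flip procedure behind Figure 5 can be sketched as follows. As in the earlier sketches, `output_dist_fn` is a hypothetical hook for the model above the attention layer, and the zero-vector fallback implements the treatment of fully erased instances described in Section 5.3.

```python
import numpy as np

def fraction_removed_until_flip(alpha, ranking, output_dist_fn):
    """Zero attention weights in ranked order (renormalizing each time) until
    the argmax decision changes; return the fraction of items removed, or None
    if the decision never changes (such instances are excluded from analysis)."""
    alpha = np.asarray(alpha, dtype=float)
    original = np.argmax(output_dist_fn(alpha))
    removed = np.zeros(len(alpha), dtype=bool)
    for step, idx in enumerate(ranking, start=1):
        removed[idx] = True
        a = np.where(removed, 0.0, alpha)
        a = a / a.sum() if a.sum() > 0 else a   # all removed -> zero vector
        if np.argmax(output_dist_fn(a)) != original:
            return step / len(alpha)
    return None
```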
We also note that due to the exponential nature of the softmax, softmax attention distributions typically contain only a few high-weighted items before the calculated weights become quite small, which can be misleading. In many cases, flipping the model’s original decision requires digging deep into the small attention weights, with the high-weighted components not actually being the reason for the decision. For several of our models, especially the FLANs (which typically compute attention over hundreds of tokens), this fact is concerning from an explainability perspective. Lipton (2016) describes a model as “transparent” if “a person can contemplate the entire model at once.” Applying this insight to the explanations suggested by attention, if an explanation rests on simultaneously considering hundreds of attended tokens necessary for a decision– even if that set were minimal—that would still raise serious transparency concerns. 5.6 Effects of Contextualization Scope on Attention’s Interpretability One last question we consider is whether the large number of items that are removed before decision flips can be explained in part by the scope of each model’s contextualization. In machine translation, prior work has observed that recurrent encoders over a full sequence can “shift” tokens’ signal in ways that cause subsequent attention layers to compute unintuitive off-by-one alignments (Koehn and Knowles, 2017). We hypothesize that in our text classification setting, the bidirectional recurrent structure of the HANrnn and FLANrnn encoders might instead be redistributing operative signal from a few informative input tokens across many others’ contextualized representations. Comparing the decision flip results for the FLANconvs in Figure 5 to those for the FLANrnns supports this theory. We notice decision flips happening much faster than for either of the RNN-based model architectures, indicating that the biRNN effectively does learn to widely redis2938 Figure 6: The distribution of fractions of items removed before decision flips on the encoderless model architectures under different ranking schemes. The Amazon FLANnoenc results have a very long tail; using the legend’s order of rankings, the percentage of test instances with a removed fraction above 0.50 for that model is 12.4%, 2.8%, 0.9%, and 0.5%, respectively. tribute the classification signal. In contrast, the convolutional encoders only allow contextualization with respect to an input token’s two neighbors to either side. We see similar results when comparing the two HAN architectures, albeit much more weakly (see Figure 10 in Appendix A.2); this is likely due to the smaller number of tokens being contextualized by the HANs (sentence representations instead of words), so that contextualization with respect to a token’s close neighbors encompasses a much larger fraction of the full sequence. We see this difference even more strongly when we compare to the encoderless model architectures, as shown in Figure 6. Compared to both other model architectures, we see the fraction of necessary items to erase for flipping the decision plummet. 
We also see random orderings mostly do better than before, indicating more brittle decision boundaries, especially on the Amazon dataset.6 In this situation, we see attention magnitudes generally indicate importance on par with (or better than) gradients, but that the product-based order6This is likely due to the fact that with no contextualization, the final attended representations are just a linear combination of the input embeddings, so the embeddings themselves are responsible for learning to directly encode a decision. Since Amazon has the largest ratio of documents (which probably vary in their label) to unique word embeddings by a factor of more than two times any other dataset’s, and the final attended representations in the FLANnoencs are unaggregated word embeddings, it stands to reason that the lack of encoders would be a much bigger obstacle in its case. ing is still often a more efficient explanation. While these differences themselves are not an argument against attention’s interpretability, they highlight the distinction between attention’s weighting of intermediate, contextualized representations and the model’s use of the original input tokens themselves. Our RNN-based models’ ability to maintain their original decision well past the point at which models using only local or no context have lost the signal driving their original decisions confirms that attention weights for a contextualized representation do not necessarily map neatly back to the original tokens. This might at least partly explain the striking near-indifference of the model’s decision to the contributions of particular contextualized representations in both our RNN-based models and in Jain and Wallace (2019), who also use recurrent encoders. However, the results from almost all models continue to support that ranking importance by attention is still not optimal; our non-random alternative rankings still uncover many cases where fewer items could be removed to achieve a decision flip than the attention weights imply. 6 Limitations There are important limitations to the work we describe here, perhaps the most important of which is our focus on text classification. By choosing to fo2939 cus on this task, we use the fact that decision flips are often not trivially achieved to ground our judgments of importance in model decision changes. However, for a task with a much larger output space (such as language modeling or machine translation) where almost anything might flip the decision, decision flips are likely too coarse a signal to identify meaningful differences. Determining an analogous informative threshold in changes to model outputs would be key to expanding this sort of analysis to other groups of models. A related limitation is our reliance in many of these tests on a fairly strict definition of importance tied to the output’s argmax; an alternative definition of importance might assert that the highest attention weights should identify the most influential representations in pushing towards any output class, not just the argmax. Two of the core challenges that would need to be solved to test for how well attention meets this relaxed criterion would be meaningfully evaluating a single attended item’s “importance” to multiple output classes for comparison to other attended items and, once again, determining what would truly indicate being “most influential” in the absence of decision flips as a guide to the output space. 
Also, while we explore several model architectures in this work, there exist other attention functions such as multi-headed and scaled dot-product (Vaswani et al., 2017), as well as cases where a single attention layer is responsible for producing more than one attended representation, such as in self-attention (Cheng et al., 2016). These variants could have different interpretability properties. Likewise, we only evaluate on final layers of attention here; in large models, lower-level layers of attention might learn to work differently. 7 Related and Future Work We have adopted an erasure-based approach to probing the interpretability of computed attention weights, but there are many other possible approaches. For example, recent work has focused on which training instances (Koh and Liang, 2017) or which human-interpretable features were most relevant for a particular decision (Ribeiro et al., 2016; Arras et al., 2016). Others have explored alternative ways of comparing the behavior of proposed explanation methods (Adebayo et al., 2018). Yet another line of work focuses on aligning models with human feedback for what is interpretable (Fyshe et al., 2015; Subramanian et al., 2017), which could refine our idea of what defines a highquality explanation derived from attention. Finally, another direction for future work would be to extend the importance-ranking comparisons that we deploy here for evaluation purposes into a method for deriving better, more informative rankings, which in turn could be useful for the development of new, more interpretable models. 8 Conclusion It is frequently assumed that attention is a tool for interpreting a model, but we find that attention does not necessarily correspond to importance. In some ways, the two correlate: comparing the highest attention weight to a lower weight, the high attention weight’s impact on the model is often larger. However, the picture becomes bleaker when we consider the many cases where the highest attention weights fail to have a large impact. Examining these cases through multi-weight tests, we see that attention weights often fail to identify the sets of representations most important to the model’s final decision. Even in cases when an attention-based importance ranking flips the model’s decision faster than an alternative ranking, the number of zeroed attended items is often too large to be helpful as an explanation. We also see a marked effect of the contextualization scope preceding the attention layer on the number of attended items affecting the model’s decision; while attention magnitudes do seem more helpful in uncontextualized cases, their lagging performance in retrieving decision rationales elsewhere is cause for concern. What is clear is that in the settings we have examined, attention is not an optimal method of identifying which attended elements are responsible for an output. Attention may yet be interpretable in other ways, but as an importance ranking, it fails to explain model decisions. Acknowledgments This research was supported in part by a grant from the Allstate Corporation; findings do not necessarily represent the views of the sponsor. We thank R. Andrew Kreek, Paul Koester, Kourtney Traina, and Rebecca Jones for early conversations leading to this work. We also thank Omer Levy, Jesse Dodge, Sarthak Jain, Byron Wallace, and Dan Weld for helpful conversations, and our anonymous reviewers for their feedback. 
2940 References Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. arXiv preprint arXiv:1606.07298. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J. Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the ACM SIGKDD International Conference on Knowledge Ciscovery and Data mining. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretation difficult. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Alona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, and Tom M. Mitchell. 2015. A compositional and interpretable semantic space. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Reza Ghaeini, Xiaoli Z. Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language inference. arXiv preprint arXiv:1808.03894. Amirata Ghorbani, Abubakar Abid, and James Zou. 2017. Interpretation of Neural Networks is Fragile. arXiv preprint arXiv:1710.10547. Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. Before Name-calling: Dynamics and Triggers of Ad Hominem Fallacies in Web Argumentation. arXiv preprint arXiv:1802.06613. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. arXiv preprint arXiv:1703.04730. 
Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive Visualization and Manipulation of Attention-based Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 121–126. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in NLP. arXiv preprint arXiv:1506.01066. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. 2941 Zachary C Lipton. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490. Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association of Computational Linguistics, 6:63–75. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Andr´e Martins and Ram´on Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning. David Alvarez Melis and Tommi Jaakkola. 2018. Towards robust interpretability with self-explaining neural networks. In Advances in Neural Information Processing Systems. Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data mining. Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of the International Joint Conference on Artificial Intelligence. Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Eduard Hovy. 2017. SPINE: Sparse interpretable neural embeddings. arXiv preprint arXiv:1711.08792. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. Yequan Wang, Minlie Huang, Li Zhao, et al. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615. Fan Yang, Arjun Mukherjee, and Eduard Dragut. 2017. 
Satirical news detection and analysis using attention mechanism and linguistic features. arXiv preprint arXiv:1709.01189. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems. 2942 A Appendices A.1 Model Hyperparameters and Performance We lowercased all tokens during preprocessing and used all hyperparameters specified in (Yang et al., 2016), except for those related to the optimization algorithm or, in the case of the convolutional or no-encoder models, the encoder. For each convolutional encoder, we trained two convolutions: one sweeping over five tokens, and one sweeping over three. As the output representation of token x, we then concatenated the outputs of the five-token and three-token convolutions centered on x. Unless otherwise noted, to train each model, we used Adam (Kingma and Ba, 2014) with gradient clipping of 10.0 and a patience value of 5, so we would stop training a model if five epochs elapsed without any improvement in validation set accuracy. In addition, for each model, we specified a learning rate for training, and dropout before each encoder layer (or attention layer, for the encoderless models) and also within the classification layer. For the HAN models, these are the values we used: • Yahoo Answers HANrnn, Yahoo Answers HANconv – Pre-sentence-encoder dropout: 0.4445 – Pre-document-encoder dropout: 0.2202 – Classification layer dropout: 0.3749 – Learning rate: 0.0004 • IMDB HANrnn – Pre-sentence-encoder dropout: 0.4445 – Pre-document-encoder dropout: 0.2202 – Classification layer dropout: 0.2457 – Learning rate: 0.0004 • Amazon HANrnn, Amazon HANconv – Pre-sentence-encoder dropout: 0.6 – Pre-document-encoder dropout: 0.2 – Classification layer dropout: 0.4 – Learning rate: 0.0002 • Amazon HANnoenc – Pre-sentence-encoder dropout: 0.6 – Pre-document-encoder dropout: 0.2 – Classification layer dropout: 0.4 – Learning rate: 0.0002 – Patience: 10 • Yelp HANrnn, Yelp HANconv – Pre-sentence-encoder dropout: 0.7 – Pre-document-encoder dropout: 0.1 – Classification layer dropout: 0.7 – Learning rate: 0.0001 • Yelp HANnoenc – Pre-sentence-encoder dropout: 0.7 – Pre-document-encoder dropout: 0.1 – Classification layer dropout: 0.7 – Learning rate: 0.0001 – Patience: 10 • Yahoo Answer HANnoenc – Pre-sentence-encoder dropout: 0.4445 – Pre-document-encoder dropout: 0.2202 – Classification layer dropout: 0.3749 – Learning rate: 0.0004 – Patience: 10 • IMDB HANconv – Pre-sentence-encoder dropout: 0.4445 – Pre-document-encoder dropout: 0.2202 – Classification layer dropout: 0.2457 – Learning rate: 0.0004 • IMDB HANnoenc – Pre-sentence-encoder dropout: 0.4445 – Pre-document-encoder dropout: 0.2202 – Classification layer dropout: 0.2457 – Learning rate: 0.0004 – Patience: 10 For the FLAN models, these are the values we used: • Yahoo Answers FLANrnn, Yahoo Answers FLANconv – Pre-document-encoder dropout: 0.4445 – Classification layer dropout: 0.4457 – Learning rate: 0.0004 • IMDB FLANrnn, IMDB FLANconv – Pre-document-encoder dropout: 0.4445 – Classification layer dropout: 0.3457 – Learning rate: 0.0004 2943 Dataset HANrnn HANconv HANnoenc FLANrnn FLANconv FLANnoenc Yahoo Answers 74.6 72.8 73.1 
75.5 73.1 72.3 IMDB 50.3 48.9 46.1 49.1 48.2 45.4 Amazon 56.9 55.3 51.2 56.6 54.4 50.2 Yelp 63.0 61.0 58.6 62.3 60.7 58.2 Table 3: Classification accuracy of the different trained models on their respective test sets • Amazon FLANrnn, Amazon FLANconv – Pre-document-encoder dropout: 0.6 – Classification layer dropout: 0.4 – Learning rate: 0.0002 • Amazon FLANnoenc – Pre-document-encoder dropout: 0.6 – Classification layer dropout: 0.4 – Learning rate: 0.0002 – Patience: 10 • Yelp FLANrnn, Yelp FLANconv – Pre-document-encoder dropout: 0.7 – Classification layer dropout: 0.7 – Learning rate: 0.0001 • Yelp FLANnoenc – Pre-document-encoder dropout: 0.7 – Classification layer dropout: 0.7 – Learning rate: 0.0001 – Patience: 10 • Yahoo Answers FLANnoenc – Pre-document-encoder dropout: 0.4445 – Classification layer dropout: 0.4457 – Learning rate: 0.0004 – Patience: 10 • IMDB FLANnoenc – Pre-document-encoder dropout: 0.4445 – Classification layer dropout: 0.3457 – Learning rate: 0.0004 – Patience: 10 Trained model classification accuracies are reported in Table 3. We note that our IMDB data and Yelp data are different sets of reviews from those used by Yang et al. (2016), so our reported performances are not directly comparable to theirs. We were unable to reach a comparable performance for the Amazon dataset (and Yelp dataset, although different) to that in (Yang et al., 2016). We suspect that this is due to not pretraining the word2vec embeddings used by the model for long enough, combined with memory limitations on our hardware that necessitated decreasing our batch size in many cases. However, as noted in section 3, the analysis that we perform does not depend on model accuracy. It’s also worth noting that for the datasets for which we are able to get results that either pass or come close to the accuracies listed in the original HAN paper, the patterns we see in the results for the tests that we run are the same as the patterns that we see for the others. A.2 Full Sets of Plots Here we include the full sets of result plots for all models for all tests we describe in the paper, in order of appearance. In Figure 7, we see that the majority of ∆JS values continue to fall above 0, and that most are still close to 0. One point not stated in the paper, though, is that the upswing in ∆JS values as the difference between i∗’s weight and a randomly chosen weight increases tends to occur slightly earlier for models with less contextualization, implying that the improving efficiency of the attention-based ranking at flipping the decision as contextualization scope shrinks is also reflected in single-weight test results. Looking at where negative ∆JS values tend to occur in Figure 8, we once again see that they tend to cluster around cases where the difference between the highest and randomly chosen attention weights is close to 0. There are some exceptions, however; perhaps the most obvious are the fat tails of these counts for the Yahoo Answer HAN models. Considering the highest-attentionweight ranking of importance for all Yahoo Answers HAN models in Figure 10 struggle in flipping the decision quickly, it may be that attention is less helpful than usual in identifying importance 2944 in its case, which could explain this discrepancy. In Figure 9, we list contingency tables for all i∗versus-random single-weight decision-flip tests. We continue to see higher values overall in our blue cells than orange, as described in section 4.2. 
The most general change we notice across all the tables is that in the encoderless case, there are more test instances (often many more) where at least one of i∗or our random attended item flipped the decision than for any other architecture, except in the case of the Yahoo Answers FLAN. Thinking about why this might be, we recall that in the encoderless case, word embeddings are much more directly responsible for encoding a decision. Yahoo Answers is our only topic classification dataset, where keywords like “computer” or “basketball” might be much clearer indicators of a topic than, say, “like” or “love” would be indicators of a rating of 8 versus 9. This likely leads to much less certain decisions being encoded in the word embeddings of the non-Yahoo Answers datasets. For all other models, and in the case where potentially contradictory Yahoo Answers word embeddings are blended together before the final layer of attention (its HANnoenc), it is likely that decisions are simply more brittle overall. Finally, in Figure 10, we include the full set of fraction-removed distributions for the first decision flips reached under the different rankings we explored. A.3 Additional Tests Besides the tests we describe in the main paper, some of the other tests that we ran provide additional insights into our results. We briefly describe those here. In Figure 11, we provide the distributions of the original attention probability distributions that were zeroed at the point when different ranking schemes achieved their first decision flips. (Equivalently, these are the distributions of the sums of the zeroed attention weights described in Figure 10, only without repeated normalization.) We include these results to give a sense of which attention magnitudes the different rankings typically place towards the top. We notice that this probability mass required to change a decision is often quite high, which is unsurprising for the attentionbased ranking, given that it frequently requires removing many items to flip decisions and attention distributions tend to have just a few high weights. Besides that, the main takeaway that we see here is that for most models, the distribution of attention probability masses zeroed by our gradientbased ranking or our product-based ranking is often shifted down by around 0.25 or more compared to the corresponding attention probability mass distribution for the attention-based ranking, which is a fairly large difference. This would seem to imply that these alternative rankings (which usually flip decisions faster) tend to differ in relatively substantial ways from the rankings suggested by the pure attention weights, not just in the long tail of their orderings, which is another warning sign against attention’s interpretability. The final set of tests that we include in Figures 12 and 13 consist of rerunning our singleweight decision-flip tests on the single “most important” attention weights in their respective attention distributions as suggested by our alternative rankings (gradient-based and product-based rankings) instead of attention magnitudes. These results serve two functions: first, they imply still more information about when the top weight suggested by an alternative, faster-decision-flipping ranking differs from the top attention weight. 
Intuitively, if we observe large differences between the sum of the “yes” row for one contingency table and the “yes” rows for the other rankings’ tables on that same model, this is likely due to differences in the frequencies with which the highestranked items achieve a decision flip, indicating differences in highest-ranked items (“likely” because of the noise added by the random sampling). The second piece of information that these tests provide is a lower bound (via the sum of the “yes” rows) for the number of cases where rankings flip a decision as quickly as possible (i.e., in the first item). For context, the sum of the “yes” row is higher than the corresponding sum in Figure 9 for all contingency tables using our product-based ordering. For the gradient-based ordering, however, this sum is actually lower than for the attentionbased ranking’s tables in 14 out of our 24 models. This tells us that our gradient-based method often finds fewer single-item ways of flipping decisions than the attention-based ranking, so in order to achieve its more efficient overall distribution of flips that we see for many models in Figure 10, it must usually flip decisions faster than attention in cases where both its ranking and the attentionbased ranking require multiple removed weights. 2945 Figure 7: Differences in attention weight magnitude plotted against ∆JS for all datasets and architectures 2946 Figure 8: Counts of negatives ∆JS values grouped by the difference in their corresponding attention weights for all datasets and architectures. 2947 Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.5 8.7 Yes 2.2 12.2 No 1.3 89.6 No 1.4 84.2 Amazon Yelp Yes No Yes No Yes 2.7 7.6 Yes 1.5 8.9 Remove i∗: Decision flip? No 2.7 87.1 No 1.9 87.7 (a) HANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.3 9.0 Yes 0.1 14.5 No 0.3 90.3 No 0.1 85.3 Amazon Yelp Yes No Yes No Yes 0.6 7.3 Yes 0.3 8.0 Remove i∗: Decision flip? No 0.5 91.6 No 0.3 91.4 (b) FLANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 1.2 3.3 Yes 3.4 14.7 No 2.1 93.4 No 2.6 79.3 Amazon Yelp Yes No Yes No Yes 4.4 13.1 Yes 3.3 11.1 Remove i∗: Decision flip? No 5.4 77.0 No 4.0 81.6 (c) HANconvs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.4 6.8 Yes 0.2 17.3 No 0.8 91.9 No 0.3 82.2 Amazon Yelp Yes No Yes No Yes 1.3 6.8 Yes 0.8 12.1 Remove i∗: Decision flip? No 1.6 90.2 No 0.7 86.3 (d) FLANconvs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 2.5 18.2 Yes 6.7 34.7 No 3.7 75.7 No 3.2 55.4 Amazon Yelp Yes No Yes No Yes 13.8 25.8 Yes 8.4 18.0 Remove i∗: Decision flip? No 6.0 54.3 No 5.2 68.4 (e) HANnoencs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.7 5.8 Yes 0.3 27.6 No 1.1 92.4 No 0.3 71.8 Amazon Yelp Yes No Yes No Yes 2.5 24.8 Yes 1.4 14.1 Remove i∗: Decision flip? No 1.6 71.0 No 1.3 83.2 (f) FLANnoencs Figure 9: Using the definition of i∗given in section 4 (the highest-attention-weight attended item) and comparing to a different randomly selected attended item, these were the percentages of test instances that fell into each decision-flip indicator variable category for each of the four test sets on all models. Since we require our random item not to be i∗, we exclude any instances with a final sequence length of 1 (one sentence for the HANs, one word for the FLANs) from analysis. 
2948 Figure 10: Distribution of fraction of attention weights that had to be removed by different ranking schemes to change each model architecture’s decisions for each of the four datasets. The different rankings (aside from “Attention”, which corresponds to the attention weight magnitudes in descending order) are described in section 5.2. 2949 Figure 11: Distribution of probability masses that had to be removed by different ranking schemes to change each model architecture’s decisions for each of the four datasets. While we do not discuss these in the paper due to space constraints, we notice that in most cases, a high fraction of the original attention distribution’s probability mass must be zeroed before the (renormalized) modified attended representation results in a changed decision using the Attention ranking. 2950 Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 1.6 7.9 Yes 2.8 13.7 No 0.2 90.3 No 0.6 82.9 Amazon Yelp Yes No Yes No Yes 4.9 10.2 Yes 3.0 10.0 Remove i∗ g: Decision flip? No 0.4 84.5 No 0.4 86.6 (a) HANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.5 7.2 Yes 0.1 10.9 No 0.2 92.2 No 0.1 88.9 Amazon Yelp Yes No Yes No Yes 0.9 7.1 Yes 0.4 7.4 Remove i∗ g: Decision flip? No 0.2 91.8 No 0.1 92.1 (b) FLANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 2.8 3.0 Yes 5.0 17.8 No 0.3 93.8 No 1.2 75.9 Amazon Yelp Yes No Yes No Yes 9.1 16.7 Yes 6.2 13.2 Remove i∗ g: Decision flip? No 0.8 73.5 No 1.0 79.6 (c) HANconvs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.9 12.7 Yes 0.4 16.6 No 0.3 86.1 No 0.2 82.8 Amazon Yelp Yes No Yes No Yes 2.0 15.4 Yes 1.2 11.3 Remove i∗ g: Decision flip? No 1.0 81.6 No 0.4 87.2 (d) FLANconvs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 5.3 15.7 Yes 7.4 30.5 No 0.8 78.2 No 2.9 59.3 Amazon Yelp Yes No Yes No Yes 18.1 32.0 Yes 11.8 26.2 Remove i∗ g: Decision flip? No 1.7 48.2 No 1.6 60.4 (e) HANnoencs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.7 4.1 Yes 0.3 11.1 No 1.1 94.1 No 0.4 88.2 Amazon Yelp Yes No Yes No Yes 2.9 17.0 Yes 1.2 6.2 Remove i∗ g: Decision flip? No 1.2 78.8 No 1.4 91.2 (f) FLANnoencs Figure 12: Let i∗ g be the highest-ranked attended item using our purely gradient-based ranking of importance described in section 5.2. We rerun our single-weight decision flip tests using this new i∗ g, comparing to a different randomly selected attended item. These were the percentages of test instances that fell into each decision-flip indicator variable category for each of the four test sets on all models. Since we require our random item not to be i∗ g, we exclude any instances with a final sequence length of 1 (one sentence for the HANs, one word for the FLANs) from analysis. 2951 Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 1.3 9.5 Yes 2.9 15.5 No 0.5 88.7 No 0.6 81.0 Amazon Yelp Yes No Yes No Yes 4.5 10.5 Yes 2.7 11.3 Remove i∗ p: Decision flip? No 0.8 84.2 No 0.8 85.3 (a) HANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 0.6 13.2 Yes 0.2 18.3 No 0.0 86.2 No 0.1 81.5 Amazon Yelp Yes No Yes No Yes 0.9 10.5 Yes 0.4 10.6 Remove i∗ p: Decision flip? No 0.1 88.5 No 0.1 88.9 (b) FLANrnns Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 2.8 5.1 Yes 5.0 20.5 No 0.4 91.7 No 0.9 73.6 Amazon Yelp Yes No Yes No Yes 8.8 18.2 Yes 6.3 16.0 Remove i∗ p: Decision flip? No 1.0 72.0 No 0.8 76.9 (c) HANconvs Remove random: Decision flip? 
Yahoo IMDB Yes No Yes No Yes 1.1 19.3 Yes 0.5 26.7 No 0.0 79.6 No 0.0 72.8 Amazon Yelp Yes No Yes No Yes 2.7 32.7 Yes 1.3 20.4 Remove i∗ p: Decision flip? No 0.2 64.3 No 0.2 78.1 (d) FLANconvs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 5.6 23.1 Yes 9.2 42.9 No 0.6 70.7 No 0.8 47.1 Amazon Yelp Yes No Yes No Yes 18.4 33.9 Yes 11.9 27.8 Remove i∗ p: Decision flip? No 1.5 46.2 No 1.6 58.7 (e) HANnoencs Remove random: Decision flip? Yahoo IMDB Yes No Yes No Yes 1.8 19.4 Yes 0.6 36.2 No 0.0 78.8 No 0.1 63.1 Amazon Yelp Yes No Yes No Yes 3.8 35.9 Yes 2.3 26.3 Remove i∗ p: Decision flip? No 0.3 60.0 No 0.3 71.1 (f) FLANnoencs Figure 13: Let i∗ p be the highest-ranked attended item using our attention-gradient product ranking of importance described in section 5.2. Once again, we rerun our single-weight decision flip tests using this new i∗ p, comparing to a different randomly selected attended item. These were the percentages of test instances that fell into each decision-flip indicator variable category for each of the four test sets on all models. Since we require our random item not to be i∗ p, we exclude any instances with a final sequence length of 1 (one sentence for the HANs, one word for the FLANs) from analysis.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2952–2962 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2952 Correlating neural and symbolic representations of language Grzegorz Chrupała Tilburg University [email protected] Afra Alishahi Tilburg University [email protected] Abstract Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP. Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees. We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results. We then apply our methods to correlate neural representations of English sentences with their constituency parse trees. 1 Introduction Analysis methods which allow us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach to natural language processing. A popular technique for analyzing neural representations involves predicting information of interest from the activation patterns, typically using a simple predictive model such as a linear classifier or regressor. If the model is able to predict this information with high accuracy, the inference is that the neural representation encodes it. We refer to these as diagnostic models. One important limitation of this method of analysis is that it is only easily applicable to relatively simple types of target information, which are amenable to be predicted via linear regression or classification. Should we wish to decode activation patterns into a structured target such as a syntax tree, we would need to resort to complex structure prediction algorithms, running the risk that the analytic method becomes no simpler than the actual neural model. Here we introduce an alternative approach based on correlating neural representations of sentences and structured symbolic representations commonly used in linguistics. Crucially, the correlation is in similarity space rather than in the original representation space, removing most constraints on the types of representations we can use. Our approach is an extension of the Representational Similarity Analysis (RSA) method, initially introduced by Kriegeskorte et al. (2008) in the context of understanding neural activation patterns in human brains. In this work we propose to apply RSA to neural representations of strings from a language on one side, and to structured symbolic representations of these strings on the other side. To capture the similarities between these symbolic representations, we use a tree kernel, a metric to compute the proportion of common substructures between trees. This approach enables straightforward comparison of neural and symbolic-linguistic representations. Furthermore, we introduce RSAREGRESS, a similarity-based analytic method which combines features of RSA and of diagnostic models. 
We validate both techniques on neural models which process a synthetic language for arithmetic expressions with a simple syntax and semantics and show that they behave as expected in this controlled setting. We further apply our techniques to two neural models trained on English text, Infersent (Conneau et al., 2017) and BERT (Devlin et al., 2018), and show that both models encode a substantial amount of syntactic information compared to random models and simple bag-of-words representations; we also show that according to our metrics syntax is most salient in the intermediate layers of BERT. 2953 2 Related work 2.1 Analytic methods The dominance of deep learning models in NLP has brought an increasing interest in techniques to analyze these models and gain insight into how they encode linguistic information. For an overview of analysis techniques, see Belinkov and Glass (2019). The most widespread family of techniques are diagnostic models, which use the internal activations of neural networks trained on a particular task as input to another predictive model. The success of such a predictive model is then interpreted as evidence that the predicted information has been encoded by the original neural model. The approach has also been called auxiliary task (Adi et al., 2017), decoding (Alishahi et al., 2017), diagnostic classifier (Hupkes et al., 2018) or probing (Conneau et al., 2018). Diagnostic models have used a range of predictive tasks, but since their main purpose is to help us better understand the dynamics of a complex model, they themselves need to be kept simple and interpretable. This means that the predicted information in these techniques is typically limited to simple class labels or values, as opposed to symbolic, structured representations of interest to linguists such as syntactic trees. In order to work around this limitation Tenney et al. (2019) present a method for probing complex structures via a formulation named edge probing, where classifiers are trained to predict various lexical, syntactic and semantic relations between representation of word spans within a sentence. Another important consideration when analyzing neural encodings is the fact that a randomly initialized network will often show non-random activation patterns. The reason for this depends on each particular case, but may involve the dynamics of the network itself as well as features of the input data. For a discussion of this issue in the context of diagnostic models see Zhang and Bowman (2018). Alternative approaches have been proposed to analyzing neural models of language. For example, Saphra and Lopez (2019) train a language model and parallel recurrent models for POS, semantic and topic tagging, and measure the correlation between the neural representations of the language model and the taggers. Others modify the neural architecture itself to make it more interpretable: Croce et al. (2018) adapt layerwise relevance propagation (Bach et al., 2015) to Kernel-based Deep Architectures (Croce et al., 2017) in order to retrieve examples which motivate model decisions. A vector representation for a given structured symbolic input is built based on kernel evaluations between the input and a subset of training examples known as landmarks, and the network decision is then traced back to the landmarks which had most influence on it. In our work we also use kernels between symbolic structures, but rather than building a particular interpretable model we propose a general analytical framework. 
2.2 Representation Similarity Analysis Kriegeskorte et al. (2008) present RSA as a variant of pattern-information analysis, to be applied for understanding neural activation patterns in human brains, for example syntactic computations (Tyler et al., 2013) or sensory cortical processing (Yamins and DiCarlo, 2016). The core idea is to find connections between data from neuroimaging, behavioral experiments and computational modeling by correlating representations of stimuli in each of these representation spaces via their pairwise (dis)similarities. RSA has also been used for measuring similarities between neuralnetwork representation spaces (e.g. Bouchacourt and Baroni, 2018; Chrupała, 2019). 2.3 Tree kernels For extending RSA to a structured representation space, we need a metric for measuring (dis)similarity between two structured representations. Kernels provide a suitable framework for this purpose: Collins and Duffy (2002) introduce convolutional kernels for syntactic parse trees as a metric which quantifies similarity between trees as the number of overlapping tree fragments between them, and introduce a polynomial time algorithm to compute these kernels; Moschitti (2006) propose an efficient algorithm for computing tree kernels in linear average running time. 2.4 Synthetic languages When developing techniques for analyzing neural network models of language, several studies have used synthetic data from artificial languages. Using synthetic language has the advantage that its structure is well-understood and the complexity of the language and the statistical characteristics of the generated data can be carefully con2954 trolled. The tradition goes back to the first generation of connectionist models of language (Elman, 1990; Hochreiter and Schmidhuber, 1997). More recently, Sennhauser and Berwick (2018) and Skachkova et al. (2018) both use contextfree grammars to generate data, and train RNNbased models to identify matching numbers of opening and closing brackets (so called Dyck languages). The task can be learned, but Sennhauser and Berwick (2018) report that the models fail to generalize to longer sentences. Paperno (2018) also show that with extensive training and the appropriate curriculum, LSTMs trained on synthetic language can learn compositional interpretation rules. Nested arithmetic languages are also appealing choices since they have an unambiguous hierarchical structure and a clear compositional semantic interpretation (i.e. the value of the arithmetic expression). Hupkes et al. (2018) train RNNs to calculate the value of such expressions and show that they perform and generalize well to unseen strings. They apply diagnostic classifiers to analyze the strategy employed by the RNN model. 3 Similarity-based analytical methods RSA finds connections between data from two different representation spaces. Specifically, for each representation type we compute a matrix of similarities between pairs of stimuli. Pairs of these matrices are then subject to second-order analysis by extracting their upper triangulars and computing a correlation coefficient between them. Thus for a set of objects X, given a similarity function sk for a representation k, the function Sk which computes the representational similarity matrix is defined as: Sk(X) = U Ui,j = sk(Xi, Xj), (1) and the RSA score between representations k and l for data X is the correlation (such as Pearson’s correlation coefficient r) between the upper triangulars Sk(X) and Sl(X), excluding the diagonals. 
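A minimal NumPy sketch of the RSA score defined by Eq. 1: build the two representational similarity matrices over parallel views of the same objects, take their upper triangulars (excluding the diagonal), and correlate them with Pearson's r. The cosine similarity shown for the neural side is only an illustrative choice; any similarity function s_k can be plugged in.

```python
import numpy as np

def similarity_matrix(reprs, sim):
    """S_k(X): square matrix of pairwise similarities under the function `sim`."""
    n = len(reprs)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = sim(reprs[i], reprs[j])
    return S

def rsa_score(reprs_k, sim_k, reprs_l, sim_l):
    """Pearson's r between the upper triangulars (diagonals excluded) of the
    two representational similarity matrices for the same set of objects,
    e.g. neural sentence vectors on one side and syntax trees on the other."""
    Sk = similarity_matrix(reprs_k, sim_k)
    Sl = similarity_matrix(reprs_l, sim_l)
    iu = np.triu_indices(len(reprs_k), k=1)
    return np.corrcoef(Sk[iu], Sl[iu])[0, 1]

# an example similarity function for the neural side
cosine = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```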
Structured RSA We apply RSA to neural representations of strings from a language on one side, and to structured symbolic representations of these strings on the other side. The structural properties are captured by defining appropriate similarity functions for these symbolic representations; we use tree kernels for this purpose. A tree kernel measures the similarity between a pair of tree structures by computing the number of tree fragments they share. Collins and Duffy (2002) introduce an algorithm for efficiently computing this quantity; a tree fragment in their formulation is a set of connected nodes subject to the constraint that only complete production rules are included. Following Collins and Duffy (2002), we calculate the tree kernel between two trees T1 and T2 as:

K(T_1, T_2) = \sum_{n_1 \in T_1} \sum_{n_2 \in T_2} C(n_1, n_2, \lambda), \quad (2)

where n_1 and n_2 are the complete sets of tree fragments in T_1 and T_2, respectively, and the function C(n_1, n_2, λ) is calculated as shown in Figure 2. The parameter λ is used to scale the relative importance of tree fragments with their size. Lower values of this parameter discount larger tree fragments in the computation of the kernel; the value 1 does not do any discounting. See Figure 1 for the illustration of the effect of the value of λ on the kernel.

Figure 1: Distribution of values of the tree kernel for two settings of discounting parameter λ, for syntax trees of a sample of English sentences.

We work with normalized kernels: given a function K which computes the raw count of tree fragments in common between trees t1 and t2, the normalized tree kernel is defined as:

K'(t_1, t_2) = \frac{K(t_1, t_2)}{\sqrt{K(t_1, t_1)\,K(t_2, t_2)}}. \quad (3)

Figure 3 shows the complete set of tree fragments which the tree kernel implicitly computes for an example syntax tree.

C(n_1, n_2, \lambda) =
\begin{cases}
0, & \text{if } \mathrm{prod}(n_1) \neq \mathrm{prod}(n_2) \\
\lambda, & \text{if } \mathrm{preterm}(n_1) \wedge \mathrm{preterm}(n_2) \\
\lambda \prod_{i=1}^{nc(n_1)} \big(1 + C(\mathrm{ch}(n_1, i), \mathrm{ch}(n_2, i), \lambda)\big), & \text{otherwise.}
\end{cases}

Figure 2: Dynamic programming formula for computing a convolution kernel, after Collins and Duffy (2002). Here nc(n) is the number of children of a given (sub)tree, and ch(n, i) is its ith child; prod(n) is the production of node n, and preterm(n) is true if n is a preterminal node.

Figure 3: The complete set of tree fragments as defined by the tree kernel for the syntax tree corresponding to "the apple", after Collins and Duffy (2002).

RSAREGRESS Basic RSA measures correlation between similarities in two different representations globally, i.e. how close they are in their totality. In contrast, diagnostic models answer a more specific question: to what extent a particular type of information can be extracted from a given representation. For example, while for a particular neural encoding of sentences it may be possible to predict the length of the sentence with high accuracy, the RSA between this representation and the strings represented only by their length may be relatively small in magnitude, since the neural representation may be encoding many other aspects of the input in addition to its length. We introduce RSAREGRESS, a method which shares features of both classic RSA as well as the diagnostic model approach. Like RSA it is based on two similarity functions sk and sl specific to two different representations k and l.
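The Collins-and-Duffy recursion and its normalized variant can be sketched in plain Python, with trees represented as nested (label, children) tuples. This representation and the handling of terminal symbols are simplifications for illustration, not the authors' implementation.

```python
def preterminal(t):
    """A node is preterminal if its only child is a terminal (a bare string)."""
    label, children = t
    return len(children) == 1 and isinstance(children[0], str)

def production(t):
    """The production at a node: its label plus the labels of its children."""
    label, children = t
    return (label, tuple(c if isinstance(c, str) else c[0] for c in children))

def nodes(t):
    """All (sub)tree nodes of t."""
    yield t
    for c in t[1]:
        if not isinstance(c, str):
            yield from nodes(c)

def C(n1, n2, lam):
    """The dynamic-programming recursion of Figure 2 (Collins and Duffy, 2002)."""
    if production(n1) != production(n2):
        return 0.0
    if preterminal(n1) and preterminal(n2):
        return lam
    result = lam
    for c1, c2 in zip(n1[1], n2[1]):
        if isinstance(c1, str):        # terminal children contribute no fragments
            continue
        result *= 1.0 + C(c1, c2, lam)
    return result

def tree_kernel(t1, t2, lam=0.5):
    """K(T1, T2): sum of C over all node pairs (Eq. 2)."""
    return sum(C(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

def normalized_tree_kernel(t1, t2, lam=0.5):
    """K'(t1, t2) as in Eq. 3."""
    return tree_kernel(t1, t2, lam) / (tree_kernel(t1, t1, lam) * tree_kernel(t2, t2, lam)) ** 0.5

# the example tree from Figure 3: (NP (D the) (N apple))
np_tree = ("NP", (("D", ("the",)), ("N", ("apple",))))
print(normalized_tree_kernel(np_tree, np_tree))   # 1.0 by construction
```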
But rather than computing the square matrices Sk(X) and Sl(X) for a set of objects X, we sample a reference set of objects R to act as anchor points, and then embed the objects of interest X in the representation space k via the representational similarity function σk defined as:1 σk(X, R) = V Vi,j = sk(Xi, Rj), (4) Likewise for representation l, we calculate σl for the same set of objects X. The rows of the two resulting matrices contain two different views of the objects of interest, where the dimensions of each view indicate the degree of similarity for a particular reference anchor point. We can now fit a multivariate linear regression model to map between the two views: d B, a = arg min B,a MSE(Bσk(X, R) + a, σl(X, R)) (5) where k is the source and l is the target view, and MSE is the mean squared error. The success of this model can be seen as an indication of how predictable representation l is from representation k. Specifically, we use a cross-validated Pearson’s correlation between predicted and true targets for an L2-penalized model. 4 Synthetic language Evaluation of analysis methods for neural network models is an open problem. One frequently resorts to largely qualitative evaluation: checking whether the conclusions reached via a particular approach have face validity and match pre-existing intuitions. However pre-existing intuitions are often not reliable when it comes to complex neural models applied to also very complex natural language data. It is helpful to simplify one part of the overall system and apply the analytic technique of interest on a neural model which processes a simple and well-understood synthetic language. As our first case study, we use a simple language of 1Note that σk is simply a generalization of Sk to the nonsquare case, namely Sk(X) = σk(X, X). 2956 Syntax Meaning E →L E1 O E2 R [E] = [O]([E1], [E2]) E →D [E] = [D] O →+ [O] = λx, y.x + y mod 10 O →− [O] = λx, y.x −y mod 10 L →( R →) D →0 [D] = 0 ... ... D →9 [D] = 9 Table 1: Grammar G(L) of a language L expressing addition and subtraction modulo 10 in infix notation. The notation [·] stands for the semantic evaluation function. Subscripts on symbols serve to distinguish their multiple occurrence. arithmetic expressions. Here we first describe the language and its syntax and semantics, and then introduce neural recurrent models which process these expressions. 4.1 Arithmetic expressions Our language consists of expressions which encode addition and subtraction modulo 10. Consider the example expression ((6+2)-(3+7)). In order to evaluate the whole expression, each parenthesized sub-expression is evaluated modulo 10: in this case the left sub-expression evaluates to 8, the right one to 0 and the whole expression to 8. Table 1 gives the context-free grammar which generates this language, and the rules for semantic evaluation. Figure 4 shows the syntax tree for the example expression according to this grammar. This language lacks ambiguity, has a small vocabulary (14 symbols) and simple semantics, while at the same time requiring the processing of hierarchical structure to evaluate its expressions.2 Generating expressions In order to generate expressions in L we use the recursive function GENERATE defined in Algorithm 1. The function receives two input parameters: the branching probability p and the decay factor d. In the recursive call to GENERATE in lines 4 and 5 the probability p is divided by the decay factor. Larger values of d lead to the generation of smaller expressions. 
Within the branching path in line 6 the operator is selected uniformly at random, and likewise in the non-branching path in line 9 the digit is sampled uniformly. 2The grammar is more complex than strictly needed in order to facilitate the computation of the Tree Kernel, which assumes each vocabulary symbol is expanded from a preE R ) E R ) E D 7 O + E D 3 L ( O E R ) E D 2 O + E D 6 L ( L ( Figure 4: Syntax tree of the expression ((6+2)-(3+7)). Algorithm 1 Recursive function for generating an expression of language L. 1: function GENERATE(p, d) 2: branch ∼BERNOULLI(p) 3: if branch then 4: e1 ←GENERATE(p/d, d) 5: e2 ←GENERATE(p/d, d) 6: op ∼UNIFORM([+, −]) 7: return [E [L ( ] e1 [O op ] e2 [R ) ] ] 8: else 9: digit ∼UNIFORM([0, . . . , 9]) 10: return [E [D digit ] ] 11: end if 12: end function 4.2 Neural models of arithmetic expressions We define three recurrent models which process the arithmetic expressions from language L. Each of them is trained to predict a different target, related either to the syntax of the language or to its semantics. We use these models as a testbed for validating our analytical approaches. All these models share the same recurrent encoder architecture, based on LSTM (Hochreiter and Schmidhuber, 1997). Encoder The encoder consists of a trainable embedding lookup table for the input symbols, and a single-layer LSTM. The state of the hidden layer of the LSTM at the last step in the sequence is used as a representation of the input expression. SEMANTIC EVALUATION This model consists of the encoder as described above, which passes its representation of the input to a multi-layer perceptron component with a single output neuron. It is trained to predict the value of the input expression, with mean squared error as the loss function. In order to perform this task we would expect that the model needs to encode the hierarchical structerminal node. 2957 ture of the expression to some extent while also encoding the result of actually carrying out the operations of semantic evaluation. TREE DEPTH This model is similar to SEMANTIC EVALUATION but is trained to predict the depth of the syntax tree corresponding to the expression instead of its value. We expect this model to need to encode a fair amount of hierarchical information, but it can completely ignore the semantics of the language, including the identity of the digit symbols. INFIX-TO-PREFIX This model uses the encoder to create a representation of the input expression, which it then decodes in its prefix form. For example, the expression ((6+2)-(3+7)) is converted to (-(+62)(+37)). The decoder is an LSTM trained as a conditional language model, i.e. its initial hidden state is the output of the encoder and its input at each step is the embedding of previous output symbol. The loss function is categorical cross-entropy. We would expect this model to encode the hierarchical structure in some form as well as the identity of the digit symbols, but it can ignore the compositional semantics of the language. 4.3 Reference representations We use RSA to correlate the neural encoders from Section 4.2 with reference syntactic and semantic information about the arithmetic expressions. For the neural representations we use cosine distance as the dissimilarity metric. The reference representations and their associated dissimilarity metrics are described below. Semantic value This is simply the value to which each expression evaluates, also used as the target of the SEMANTIC EVALUATION model. 
As a measure of dissimilarity we use the absolute difference between values, which ranges from 0 to 9. Tree depth This is the depth of the syntax tree for each expression, also used as the target of the TREE DEPTH model. We use the absolute difference as the dissimilarity measure. The dissimilarity is minimum 0 and has no upper bound, but in our data the typical maximum value is around 7. Tree kernel This is an estimate of similarity between two syntax trees based on the number of tree fragments they share, as described in Section 3. The normalized tree kernel metric ranges between 0 and 1, which we convert to dissimilarity by subtracting it from 1. The semantic value and tree depth correlates are easy to investigate with a variety of analytic methods including diagnostic models; we include them in our experiments as a point of comparison. We use the tree kernel representation to evaluate structured RSA for a simple synthetic language. 4.4 Experimental settings We implement the neural models in PyTorch 1.0.0. We use the following model architecture: encoder embedding layer size 64, encoder LSTM size 128, for the regression models, MLP with 1 hidden layer of size 256; for the sequence-to-sequence model the decoder hyper-parameters are the same as the encoder. The symbols are predicted via a linear projection layer from hidden state, followed by a softmax. Training proceeds following a curriculum: we first train on 100,000 batches of size 32 of random expressions sampled with decay d = 2.0, followed by 200,000 batches with d = 1.8 and finally 400,000 batches with d = 1.5. We optimize with Adam with learning rate 0.001. We report results on expressions sampled with d = 1.5. See Figure 5 for the distribution of expression sizes for these values of d. Figure 5: Distribution of expression sizes when varying the value of the decay parameter d. The size of an expression is measured as the number of its digit nodes. We report all results for two conditions: randomly initialized, and trained, in order to quantify the effect of learning on the activation patterns. The trained model is chosen by saving model weights during training every 10,000 batches and selecting the weights with the smallest loss on 1,000 held-out validation expressions. Results 2958 are reported on separate test data consisting of 2,000 expressions and 200 reference expressions for RSAREGRESS embedding. 4.5 Results Table 2 shows the results of our experiments, where each row shows a different encoder type and each column a different target task. Semantic value and tree depth As a first sanity check, we would like to see whether the RSA techniques show the same pattern captured by the diagnostic models. As expected, both diagnostic and RSA scores are the highest when the objective function used to train the encoder and the analytical reference representations match: for example, the SEMANTIC EVALUATION encoder scores high on the semantic value reference, both for the diagnostic model and the RSA. Furthermore, the scores for the value and depth reference representation according to the diagnostic model and according to RSAREGRESS are in agreement. The scores according to RSA in some cases show a different picture. This is expected, as RSA answers a substantially different question than the other two approaches: it looks at how the whole representations match in their similarity structure, whereas both the diagnostic model and RSAREGRESS focus on the part of the representation that encodes the target information the strongest. 
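For concreteness, the RSAREGRESS procedure of Section 3 (Eqs. 4–5) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's ridge regression for the L2-penalized model, user-supplied similarity functions for the two spaces, and one particular way of computing the cross-validated Pearson correlation between predicted and true similarity-embedded targets.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def embed(X, R, sim):
    """sigma_k(X, R) of Eq. 4: each object in X is represented by its
    similarities to a fixed set of reference anchor points R."""
    return np.array([[sim(x, r) for r in R] for x in X])

def rsa_regress(X_src, R_src, sim_src, X_tgt, R_tgt, sim_tgt,
                alpha=1.0, folds=10):
    """Fit sigma_l ~ B sigma_k + a with an L2 penalty (Eq. 5).

    X_src/X_tgt (and R_src/R_tgt) are the same objects (anchors) under the
    source and target representations; the score is the cross-validated
    Pearson r between predicted and true target views."""
    V_src = embed(X_src, R_src, sim_src)   # view of the objects in space k
    V_tgt = embed(X_tgt, R_tgt, sim_tgt)   # view of the objects in space l
    pred = cross_val_predict(Ridge(alpha=alpha), V_src, V_tgt, cv=folds)
    r, _ = pearsonr(pred.ravel(), V_tgt.ravel())
    return r
```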
Tree Kernel We can use both RSA and RSAREGRESS for exploring whether the hidden activations encode any structural representation of syntax: this is evident in the scores yielded by the TK reference representations. As expected, the highest scores for both methods are gained when using INFIX-TO-PREFIX encodings, the task that relies the most on the hierarchical structure of an input string. RSAREGRESS yields the second-highest score for TREE DEPTH encodings, which also depend on aspects of tree structure. The overall pattern for the TK with different values of the discounting parameter λ is similar, even though the absolute values of the scores vary. What is unexpected is the results for the random encoder, which we turn to next.

Random encoders The non-random nature of the activation patterns of randomly initialized models (e.g., Zhang and Bowman, 2018) is also strongly in evidence in our results. For example, the random encoder has quite a high score for diagnostic regression on tree depth. Even more striking is the fact that the random encoder has a substantial negative RSA score for the Tree Kernel: thus, expression pairs more similar according to the Tree Kernel are less similar according to the random encoder, and vice versa.

When applying RSA we can inspect the full correlation pattern via a scatter-plot of the dissimilarities in the reference and encoder representations. Figure 6 shows the data for the random encoder and the Tree Kernel representations.

Figure 6: Scatterplot of dissimilarity values according to the random encoder or the trained INFIX-TO-PREFIX encoder versus the Tree Kernel (λ = 0.5).

As can be seen, the negative correlation for the random encoder is due to the fact that according to the Tree Kernel, expression pairs tend to have high dissimilarities, while according to the random encoder's activations they tend to have overall low dissimilarities. For the trained INFIX-TO-PREFIX encoder the dissimilarities are clearly positively correlated with the TK dissimilarities. Thus the raw correlation value for the trained encoder is a biased estimate of the effect of learning, as learning has to overcome the initially substantial negative correlation: a better estimate is the difference between scores for the learned and random model.

It is worth noting that the same approach would be less informative for the diagnostic model approach or for RSAREGRESS. For a regression model the correlation scores will be positive, and when taking the difference between learned and random scores, they may cancel out, even though particular information may be predictable from the random activations in a completely different way than from the learned activations. This is what we see for the RSAREGRESS scores for random vs. INFIX-TO-PREFIX encoder:

                       Diagnostic      RSA                            RSAREGRESS
Encoder          Loss  Value  Depth    Value  Depth  TK(1)  TK(0.5)   Value  Depth  TK(1)  TK(0.5)
RANDOM           –     0.01   0.80     0.01   0.23   -0.24  -0.33     -0.01  0.57   0.41   0.63
SEMANTIC EVAL.   0.07  0.97   0.70     0.62   0.05   0.02   0.01      0.97   0.55   0.38   0.61
TREE DEPTH       0.00  -0.03  1.00     0.01   0.72   0.10   -0.06     -0.03  0.97   0.49   0.87
INFIX-TO-PREFIX  0.00  0.02   0.97     -0.00  0.64   0.35   0.53      0.02   0.88   0.58   0.96

Table 2: Scores for diagnostic regression, RSA, and RSAREGRESS with respect to expression value, expression tree depth and the Tree Kernel (TK) with λ = 1 and λ = 0.5. All scores are Pearson's correlation coefficients. For the diagnostic model and RSAREGRESS they are cross-validated correlations between target and predicted values.
The randomly initialized encoder is the same for all encoder types, and thus there is only a single row for the RANDOM encoder. The loss column shows the loss of the full model on the test data: mean squared error for SEMANTIC EVALUATION and TREE DEPTH, and cross-entropy for INFIX-TO-PREFIX. the scores partially cancel out, and given the pattern in Figure 6 it is clear that subtracting them is misleading. It is thus a good idea to complement the RSAREGRESS score with the plain RSA correlation score in order to obtain a full picture of how learning affects the neural representations. Overall, these results show that RSAREGRESS can be used to answer the same sort of questions as the diagnostic model. It has the added advantage of being also easily applicable to structured symbolic representations, while the RSA scores and the full RSA correlation pattern provides a complementary source of insight into neural representations. Encouraged by these findings, we next apply both RSA and RSAREGRESS to representations of natural language sentences. 5 Natural language Here we use our proposed RSA-based techniques to compare tree-structure representations of natural language sentences with their neural representations captured by sentence embeddings. Such embeddings are often provided by NLP systems trained on unlabeled text, using variants of a language modeling objective (e.g. Peters et al., 2018), next and previous sentence prediction (Kiros et al., 2015; Logeswaran and Lee, 2018), or discourse based objectives (Nie et al., 2017; Jernite et al., 2017). Alternatively they can be either fully trained or fine-tuned on annotated data using a task such as natural language inference (Conneau et al., 2017). In our experiments we use one of each type of encoders. 5.1 Encoders Bag of words As a baseline we use a classic bag of words model where a sentence is represented by a vector of word counts. We do not exclude any words and use raw, unweighted word counts. Infersent This is the supervised model described in Conneau et al. (2017) based on a bidirectional LSTM trained on natural language inference. We use the infersent2 model with pretrained fastText (Bojanowski et al., 2017) word embeddings.3 We also test a randomly initialized version of this model, including random word embeddings. BERT This is an unsupervised model based on the Transformer architecture (Vaswani et al., 2017) trained on a cloze-task and next-sentence prediction (Devlin et al., 2018). We use the Pytorch version of the large 24-layer model (bert-large-uncased).4 We also test a randomly initialized version of this model. 5.2 Experimental settings Data We use a sample of data from the English Web Treebank (EWT) (Bies et al., 2012) which contains a mix of English weblogs, newsgroups, email, reviews and question-answers manually annotated for syntactic constituency structure. We use the 2,002 sentences corresponding to the development section of the EWT Universal Dependencies (Silveira et al., 2014), plus 200 sentences from the training section as reference sentences when fitting RSAREGRESS. Tree Kernel Prior to computing the Tree Kernel scores we delexicalize the constituency trees by replacing all terminals (i.e. words) with a single placeholder value X. This ensures that only syntactic structure, and not lexical overlap, contributes to kernel scores. We compute kernels for the values of λ ∈{1, 1 2}. 3Available at https://github.com/facebookresearch/InferSent. 4Available at https://github.com/huggingface/pytorchpretrained-BERT. 
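The delexicalization step just described and the kernel recursion of Figure 2 are compact enough to sketch directly. The code below is illustrative rather than taken from the released toolkit: it assumes trees are given as nested tuples of the form (label, child1, ..., childN) with words as plain strings, and that terminals appear only under preterminal nodes (cf. footnote 2).

```python
def delex(tree, placeholder="X"):
    """Replace all terminals (words) with a placeholder symbol so that
    only syntactic structure contributes to the kernel score."""
    if isinstance(tree, str):
        return placeholder
    label, *children = tree
    return (label, *[delex(c, placeholder) for c in children])

def nodes(tree):
    """All (sub)tree nodes, i.e. all tuples, including the root."""
    if isinstance(tree, str):
        return []
    return [tree] + [n for c in tree[1:] for n in nodes(c)]

def prod(node):
    """Production at a node: its label plus the sequence of child labels."""
    label, *children = node
    return (label, tuple(c if isinstance(c, str) else c[0] for c in children))

def preterm(node):
    """True if all children are terminals (strings)."""
    return all(isinstance(c, str) for c in node[1:])

def C(n1, n2, lam):
    """Recursion of Figure 2 (Collins and Duffy, 2002)."""
    if prod(n1) != prod(n2):
        return 0.0
    if preterm(n1) and preterm(n2):
        return lam
    out = lam
    for c1, c2 in zip(n1[1:], n2[1:]):   # same production => same arity
        out *= 1.0 + C(c1, c2, lam)
    return out

def kernel(t1, t2, lam=0.5):
    # naive O(|T1|*|T2|) version of Eq. 2; memoise C for larger trees
    return sum(C(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

def normalized_kernel(t1, t2, lam=0.5):
    """Eq. 3: K'(t1, t2) = K(t1, t2) / sqrt(K(t1, t1) K(t2, t2))."""
    return kernel(t1, t2, lam) / (kernel(t1, t1, lam) * kernel(t2, t2, lam)) ** 0.5

# usage on the example noun phrase from Figure 3
np_tree = delex(("NP", ("D", "the"), ("N", "apple")))
print(kernel(np_tree, np_tree, lam=1.0))      # 6.0: the six fragments of Figure 3
print(normalized_kernel(np_tree, np_tree))    # 1.0 by construction
```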
Encoder     Train  λ    RSA    RSAREGRESS
BoW                0.5  0.18   0.50
Infersent   −      0.5  0.24   0.51
BERT last   −      0.5  0.12   0.49
BERT best   −      0.5  0.14   0.53
Infersent   +      0.5  0.30   0.71
BERT last   +      0.5  0.16   0.59
BERT best   +      0.5  0.32   0.70
BoW                1.0  -0.01  0.40
Infersent   −      1.0  0.00   0.48
BERT last   −      1.0  -0.08  0.50
BERT best   −      1.0  -0.07  0.52
Infersent   +      1.0  0.10   0.59
BERT last   +      1.0  0.03   0.53
BERT best   +      1.0  0.18   0.60

Table 3: Correlation scores for encoders against Tree Kernel with varying λ. Scores for both RSA and RSAREGRESS are Pearson's r. The column Train indicates whether the encoder (including the word embeddings) is randomly initialized (−), or trained (+). For BERT, we report scores for the topmost (last) layer and for the layer which maximizes the given score (best).

Embeddings For the BERT embeddings we use the vector associated with the first token (CLS) for a given layer. For Infersent, we use the default max-pooled representation.

Fitting When fitting RSAREGRESS we use L2-penalized multivariate linear regression. We report the results for the value of the penalty equal to 10^n, for n ∈ {−3, −2, −1, 0, 1, 2}, with the highest 10-fold cross-validated Pearson's r between target and predicted similarity-embedded vectors.

5.3 Results Table 3 shows the results of applying RSA and RSAREGRESS on five different sentence encoders, using the Tree Kernel reference. Results are reported using two different values for the Tree Kernel parameter λ. As can be seen, with λ = 1/2, all the encoders show a substantial RSA correlation with the parse trees. The highest scores are achieved by the trained Infersent and BERT, but even Bag of Words and untrained versions of Infersent and BERT show a sizeable correlation with syntactic trees according to both RSA and RSAREGRESS. When structure matching is strict (λ = 1), only trained BERT and Infersent capture syntactic information according to RSA; however, RSAREGRESS still shows moderate correlation for BoW and the untrained versions of BERT and Infersent. Thus RSAREGRESS is less sensitive to the value of λ than RSA, since changing it from 1/2 to 1 does not alter results in a qualitative sense.

Figure 7: RSA and RSAREGRESS scores for embeddings from all the layers of BERT vs Tree Kernel for two values of λ. Both randomly initialized and trained versions of BERT are shown. The embeddings are vectors at the first token (CLS) at each layer.

Figure 7 shows how RSA and RSAREGRESS scores change when correlating Tree Kernel estimates with embeddings from different layers of BERT. For trained models, scores peak between layers 15–22 (depending on metric and λ) and decline thereafter, which indicates that the final layers are increasingly dedicated to encoding aspects of sentences other than pure syntax.

6 Conclusion We present two RSA-based methods for correlating neural and syntactic representations of language, using tree kernels as a measure of similarity between syntactic trees. Our results on arithmetic expressions confirm that both versions of structured RSA capture correlations between different representation spaces, while providing complementary insights. We apply the same techniques to English sentence embeddings, and show where and to what extent each representation encodes syntactic information. The proposed methods are general and applicable not just to constituency trees, but given a similarity metric, to any symbolic representation of linguistic structures including dependency trees or Abstract Meaning Representations. We plan to explore these options in future work.
A toolkit with the implementation of our methods is available at https://github.com/gchrupala/ursa. 2961 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. International Conference on Learning Representations (ICLR). Afra Alishahi, Marie Barking, and Grzegorz Chrupała. 2017. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368–378. Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank LDC2012T13. Web Download. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 981–985, Brussels, Belgium. Association for Computational Linguistics. Grzegorz Chrupała. 2019. Symbolic inductive bias for visually grounded learning of spoken language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Michael Collins and Nigel Duffy. 2002. Convolution kernels for natural language. In Advances in neural information processing systems, pages 625–632. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136. Association for Computational Linguistics. Danilo Croce, Simone Filice, Giuseppe Castellucci, and Roberto Basili. 2017. Deep learning in semantic kernel spaces. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 345–354. Danilo Croce, Daniele Rossini, and Roberto Basili. 2018. Explaining non-linear classifier decisions within kernel-based deep architectures. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 16–24. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. 
Long short-term memory. Neural computation, 9(8):1735–1780. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Yacine Jernite, Samuel R Bowman, and David Sontag. 2017. Discourse-based objectives for fast unsupervised sentence representation learning. arXiv preprint arXiv:1705.00557. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Nikolaus Kriegeskorte, Marieke Mur, and Peter A Bandettini. 2008. Representational similarity analysisconnecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2:4. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In International Conference on Learning Representations. Alessandro Moschitti. 2006. Making tree kernels practical for natural language learning. In 11th conference of the European Chapter of the Association for Computational Linguistics. 2962 Allen Nie, Erin D Bennett, and Noah D Goodman. 2017. Dissent: Sentence representation learning from explicit discourse relations. arXiv preprint arXiv:1710.04334. Denis Paperno. 2018. Limitations in learning an interpreted language with recurrent models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 384–386. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Naomi Saphra and Adam Lopez. 2019. Understanding learning dynamics of language models with SVCCA. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACLHLT). Association for Computational Linguistics. Luzi Sennhauser and Robert Berwick. 2018. Evaluating the ability of lstms to learn context-free grammars. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115–124. Association for Computational Linguistics. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014). Natalia Skachkova, Thomas Trost, and Dietrich Klakow. 2018. Closing brackets with recurrent neural networks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 232–239. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In ICLR 2019. Lorraine Komisarjevsky Tyler, Teresa PL Cheung, Barry J Devereux, and Alex Clarke. 2013. 
Syntactic computations in the language network: characterizing dynamic network properties using representational similarity analysis. Frontiers in psychology, 4:271. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Daniel LK Yamins and James J DiCarlo. 2016. Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience, 19(3):356. Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more syntax than translation does: Lessons learned through auxiliary task analysis. arXiv preprint arXiv:1809.10040.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2963 Interpretable Neural Predictions with Differentiable Binary Variables Wilker Aziz ILLC University of Amsterdam [email protected] Ivan Titov ILLC, University of Amsterdam ILCC, University of Edinburgh [email protected] Abstract The success of neural networks comes hand in hand with a desire for more interpretability. We focus on text classifiers and make them more interpretable by having them provide a justification—a rationale—for their predictions. We approach this problem by jointly training two neural network models: a latent model that selects a rationale (i.e. a short and informative part of the input text), and a classifier that learns from the words in the rationale alone. Previous work proposed to assign binary latent masks to input positions and to promote short selections via sparsityinducing penalties such as L0 regularisation. We propose a latent model that mixes discrete and continuous behaviour allowing at the same time for binary selections and gradient-based training without REINFORCE. In our formulation, we can tractably compute the expected value of penalties such as L0, which allows us to directly optimise the model towards a prespecified text selection rate. We show that our approach is competitive with previous work on rationale extraction, and explore further uses in attention mechanisms. 1 Introduction Neural networks are bringing incredible performance gains on text classification tasks (Howard and Ruder, 2018; Peters et al., 2018; Devlin et al., 2019). However, this power comes hand in hand with a desire for more interpretability, even though its definition may differ (Lipton, 2016). While it is useful to obtain high classification accuracy, with more data available than ever before it also becomes increasingly important to justify predictions. Imagine having to classify a large collection of documents, while verifying that the classifications make sense. It would be extremely time-consuming to read each document to evaluate the results. Moreover, if we do not pours a dark amber color with decent head that does not recede much . it ’s a tad too dark to see the carbonation , but fairs well . smells of roasted malts and mouthfeel is quite strong in the sense that you can get a good taste of it before you even swallow . Rationale Extractor pours a dark amber color with decent head that does not recede much . it ’s a tad too dark to see the carbonation , but fairs well . smells of roasted malts and mouthfeel is quite strong in the sense that you can get a good taste of it before you even swallow . Classifier look: ⋆⋆⋆⋆ Figure 1: Rationale extraction for a beer review. know why a prediction was made, we do not know if we can trust it. What if the model could provide us the most important parts of the document, as a justification for its prediction? That is exactly the focus of this paper. We use a setting that was pioneered by Lei et al. (2016). A rationale is defined to be a short yet sufficient part of the input text; short so that it makes clear what is most important, and sufficient so that a correct prediction can be made from the rationale alone. One neural network learns to extract the rationale, while another neural network, with separate parameters, learns to make a prediction from just the rationale. Lei et al. 
model this by assigning a binary Bernoulli variable to each input word. The rationale then consists of all the words for which a 1 was sampled. Because gradients do not flow through discrete samples, the rationale extractor is optimized using REINFORCE (Williams, 1992). An L0 regularizer is used to make sure the rationale is short. We propose an alternative to purely discrete selectors for which gradient estimation is possible without REINFORCE, instead relying on a repaJasmijn Bastings ILLC University of Amsterdam [email protected] 2964 rameterization of a random variable that exhibits both continuous and discrete behavior (Louizos et al., 2017). To promote compact rationales, we employ a relaxed form of L0 regularization (Louizos et al., 2017), penalizing the objective as a function of the expected proportion of selected text. We also propose the use of Lagrangian relaxation to target a specific rate of selected input text. Our contributions are summarized as follows:1 1. we present a differentiable approach to extractive rationales (§2) including an objective that allows for specifying how much text is to be extracted (§4); 2. we introduce HardKuma (§3), which gives support to binary outcomes and allows for reparameterized gradient estimates; 3. we empirically show that our approach is competitive with previous work and that HardKuma has further applications, e.g. in attention mechanisms. (§6). 2 Latent Rationale We are interested in making NN-based text classifiers interpretable by (i) uncovering which parts of the input text contribute features for classification, and (ii) basing decisions on only a fraction of the input text (a rationale). Lei et al. (2016) approached (i) by inducing binary latent selectors that control which input positions are available to an NN encoder that learns features for classification/regression, and (ii) by regularising their architectures using sparsity-inducing penalties on latent assignments. In this section we put their approach under a probabilistic light, and this will then more naturally lead to our proposed method. In text classification, an input x is mapped to a distribution over target labels: Y |x ∼Cat(f(x; θ)) , (1) where we have a neural network architecture f(·; θ) parameterize the model—θ collectively denotes the parameters of the NN layers in f. That is, an NN maps from data space (e.g. sentences, short paragraphs, or premise-hypothesis pairs) to the categorical parameter space (i.e. a vector of class probabilities). For the sake of concreteness, 1Code available at https://github.com/ bastings/interpretable_predictions. consider the input a sequence x = ⟨x1, . . . , xn⟩. A target y is typically a categorical outcome, such as a sentiment class or an entailment decision, but with an appropriate choice of likelihood it could also be a numerical score (continuous or integer). Lei et al. (2016) augment this model with a collection of latent variables which we denote by z = ⟨z1, . . . , zn⟩. These variables are responsible for regulating which portions of the input x contribute with predictors (i.e. features) to the classifier. The model formulation changes as follows: Zi|x ∼Bern(gi(x; φ)) Y |x, z ∼Cat(f(x ⊙z; θ)) (2) where an NN g(·; φ) predicts a sequence of n Bernoulli parameters—one per latent variable— and the classifier is modified such that zi indicates whether or not xi is available for encoding. We can think of the sequence z as a binary gating mechanism used to select a rationale, which with some abuse of notation we denote by x⊙z. 
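A schematic PyTorch rendering of the generative story in Eq. 2 may help fix ideas. The module below is a sketch with illustrative layer sizes, not the architecture of Lei et al. (2016) or ours; in particular, the hard Bernoulli sample blocks gradients to the generator, which is exactly why REINFORCE (or the approach proposed in this paper) is needed for training.

```python
import torch
import torch.nn as nn

class LatentRationaleModel(nn.Module):
    """Schematic version of Eq. 2: z_i ~ Bern(g_i(x)), y ~ Cat(f(x ⊙ z))."""
    def __init__(self, vocab_size, emb_dim=300, hidden=200, num_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # g(.; phi): predicts one Bernoulli parameter per input position
        self.gen_rnn = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.gen_out = nn.Linear(2 * hidden, 1)
        # f(.; theta): classifies from the masked input x ⊙ z
        self.clf_rnn = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.clf_out = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        e = self.emb(x)                                 # [B, T, E]
        h, _ = self.gen_rnn(e)
        p = torch.sigmoid(self.gen_out(h)).squeeze(-1)  # Bernoulli params, [B, T]
        z = torch.bernoulli(p)                          # hard 0/1 selectors (no gradient)
        masked = e * z.unsqueeze(-1)                    # x ⊙ z at the embedding level
        states, _ = self.clf_rnn(masked)
        logits = self.clf_out(states[:, -1])            # last position as a simple summary
        return logits, z, p
```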
Figure 1 illustrates the approach. Parameter estimation for this model can be done by maximizing a lower bound E(φ, θ) on the loglikelihood of the data derived by application of Jensen’s inequality:2 log P(y|x) = log EP(z|x,φ) [P(y|x, z, θ)] JI ≥EP(z|x,φ) [log P(y|x, z, θ)] = E(φ, θ) . (3) These latent rationales approach the first objective, namely, uncovering which parts of the input text contribute towards a decision. However note that an NN controls the Bernoulli parameters, thus nothing prevents this NN from selecting the whole of the input, thus defaulting to a standard text classifier. To promote compact rationales, Lei et al. (2016) impose sparsity-inducing penalties on latent selectors. They penalise for the total number of selected words, L0 in (4), as well as, for the total number of transitions, fused lasso in (4), and approach the following optimization problem min φ,θ −E(φ, θ)+λ0 n X i=1 zi | {z } L0(z) +λ1 n−1 X i=1 |zi −zi+1| | {z } fused lasso (4) via gradient-based optimisation, where λ0 and λ1 are fixed hyperparameters. The objective is however intractable to compute, the lowerbound, in 2This can be seen as variational inference (Jordan et al., 1999) where we perform approximate inference using a datadependent prior P(z|x, φ). 2965 particular, requires marginalization of O(2n) binary sequences. For that reason, Lei et al. sample latent assignments and work with gradient estimates using REINFORCE (Williams, 1992). The key ingredients are, therefore, binary latent variables and sparsity-inducing regularization, and therefore the solution is marked by nondifferentiability. We propose to replace Bernoulli variables by rectified continuous random variables (Socci et al., 1998), for they exhibit both discrete and continuous behaviour. Moreover, they are amenable to reparameterization in terms of a fixed random source (Kingma and Welling, 2014), in which case gradient estimation is possible without REINFORCE. Following Louizos et al. (2017), we exploit one such distribution to relax L0 regularization and thus promote compact rationales with a differentiable objective. In section 3, we introduce this distribution and present its properties. In section 4, we employ a Lagrangian relaxation to automatically target a pre-specified selection rate. And finally, in section 5 we present an example for sentiment classification. 3 Hard Kumaraswamy Distribution Key to our model is a novel distribution that exhibits both continuous and discrete behaviour, in this section we introduce it. With non-negligible probability, samples from this distribution evaluate to exactly 0 or exactly 1. In a nutshell: i) we start from a distribution over the open interval (0, 1) (see dashed curve in Figure 2); ii) we then stretch its support from l < 0 to r > 1 in order to include {0} and {1} (see solid curve in Figure 2); finally, iii) we collapse the probability mass over the interval (l, 0] to {0}, and similarly, the probability mass over the interval [1, r) to {1} (shaded areas in Figure 2). This stretch-and-rectify technique was proposed by Louizos et al. (2017), who rectified samples from the BinaryConcrete (or GumbelSoftmax) distribution (Maddison et al., 2017; Jang et al., 2017). We adapted their technique to the Kumaraswamy distribution motivated by its close resemblance to a Beta distribution, for which we have stronger intuitions (for example, its two shape parameters transit rather naturally from unimodal to bimodal configurations of the distribution). 
In the following, we introduce this new distribution formally.3 3We use uppercase letters for random variables (e.g. K, T, and H) and lowercase for assignments (e.g. k, t, h). For a 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 Kuma(0.5, 0.5, ­0.1, 1.1) Kuma(0.5, 0.5) Figure 2: The HardKuma distribution: we start from a Kuma(0.5, 0.5), and stretch its support to the interval (−0.1, 1.1), finally we collapse all mass before 0 to {0} and all mass after 1 to {1}. 3.1 Kumaraswamy distribution The Kumaraswamy distribution (Kumaraswamy, 1980) is a two-parameters distribution over the open interval (0, 1), we denote a Kumaraswamydistributed variable by K ∼Kuma(a, b), where a ∈R>0 and b ∈R>0 control the distribution’s shape. The dashed curve in Figure 2 illustrates the density of Kuma(0.5, 0.5). For more details including its pdf and cdf, consult Appendix A. The Kumaraswamy is a close relative of the Beta distribution, though not itself an exponential family, with a simple cdf whose inverse F −1 K (u; a, b) =  1 −(1 −u) 1/b1/a , (5) for u ∈[0, 1], can be used to obtain samples F −1 K (U; α, β) ∼Kuma(α, β) (6) by transformation of a uniform random source U ∼U(0, 1). We can use this fact to reparameterize expectations (Nalisnick and Smyth, 2016). 3.2 Rectified Kumaraswamy We stretch the support of the Kumaraswamy distribution to include 0 and 1. The resulting variable T ∼Kuma(a, b, l, r) takes on values in the open interval (l, r) where l < 0 and r > 1, with cdf FT (t; a, b, l, r) = FK((t −l)/(r −l); a, b) . (7) We now define a rectified random variable, denoted by H ∼HardKuma(a, b, l, r), by passing random variable K, fK(k; α) is the probability density function (pdf), conditioned on parameters α, and FK(k; α) is the cumulative distribution function (cdf). 2966 a sample T ∼Kuma(a, b, l, r) through a hardsigmoid, i.e. h = min(1, max(0, t)). The resulting variable is defined over the closed interval [0, 1]. Note that while there is 0 probability of sampling t = 0, sampling h = 0 corresponds to sampling any t ∈(l, 0], a set whose mass under Kuma(t|a, b, l, r) is available in closed form: P(H = 0) = FK  −l r−l; a, b  . (8) That is because all negative values of t are deterministically mapped to zero. Similarly, samples t ∈[1, r) are all deterministically mapped to h = 1, whose total mass amounts to P(H = 1) = 1 −FK  1−l r−l; a, b  . (9) See Figure 2 for an illustration, and Appendix A for the complete derivations. 3.3 Reparameterization and gradients Because this rectified variable is built upon a Kumaraswamy, it admits a reparameterisation in terms of a uniform variable U ∼U(0, 1). We need to first sample a uniform variable in the open interval (0, 1) and transform the result to a Kumaraswamy variable via the inverse cdf (10a), then shift and scale the result to cover the stretched support (10b), and finally, apply the rectifier in order to get a sample in the closed interval [0, 1] (10c). k = F −1 K (u; a, b) (10a) t = l + (r −l)k (10b) h = min(1, max(0, t)) , (10c) We denote this h = s(u; a, b, l, r) for short. Note that this transformation has two discontinuity points, namely, t = 0 and t = 1. Though recall, the probability of sampling t exactly 0 or exactly 1 is zero, which essentially means stochasticity circumvents points of non-differentiability of the rectifier (see Appendix A.3). 4 Controlled Sparsity Following Louizos et al. (2017), we relax nondifferentiable penalties by computing them on expectation under our latent model p(z|x, φ). 
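The sampling path of Eqs. 10a–10c and the point masses of Eqs. 8–9 translate directly into code. The PyTorch sketch below assumes the fixed support (l, r) = (−0.1, 1.1) used in Figure 2 and omits the numerical safeguards (e.g. clamping u away from 0 and 1) that a practical implementation would need; it is not the released implementation.

```python
import torch

def kuma_icdf(u, a, b):
    """Inverse CDF of Kuma(a, b), Eq. 5: (1 - (1 - u)^(1/b))^(1/a)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kuma_cdf(k, a, b):
    """CDF of Kuma(a, b): 1 - (1 - k^a)^b."""
    return 1.0 - (1.0 - k ** a) ** b

def hardkuma_sample(a, b, l=-0.1, r=1.1):
    """Reparameterized HardKuma sample (Eqs. 10a-10c):
    uniform -> Kuma via inverse CDF -> stretch to (l, r) -> hard sigmoid."""
    u = torch.rand_like(a)            # U(0, 1)
    k = kuma_icdf(u, a, b)            # Kuma(a, b) sample
    t = l + (r - l) * k               # stretched to (l, r)
    return torch.clamp(t, 0.0, 1.0)   # rectified to [0, 1]

def hardkuma_point_masses(a, b, l=-0.1, r=1.1):
    """Closed-form P(H = 0) (Eq. 8) and P(H = 1) (Eq. 9)."""
    p0 = kuma_cdf(-l / (r - l), a, b)
    p1 = 1.0 - kuma_cdf((1.0 - l) / (r - l), a, b)
    return p0, p1

# usage: a mix of exact 0s, exact 1s and values in (0, 1)
a = torch.full((5,), 0.5, requires_grad=True)
b = torch.full((5,), 0.5)
z = hardkuma_sample(a, b)
p0, p1 = hardkuma_point_masses(a, b)
print(z, (1.0 - p0).sum())  # the sum is a differentiable expected L0 (see next section)
```

Gradients with respect to a and b flow through kuma_icdf and the clamp, which is the reparameterization property that allows training without REINFORCE.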
In addition, we propose the use of Lagrangian relaxation to target specific values for the penalties. Thanks to the tractable Kumaraswamy cdf, the expected value of L0(z) is known in closed form Ep(z|x) [L0(z)] ind = n X i=1 Ep(zi|x) [I[zi ̸= 0]] = n X i=1 1 −P(Zi = 0) , (11) where P(Zi = 0) = FK  −l r−l; ai, bi  . This quantity is a tractable and differentiable function of the parameters φ of the latent model. We can also compute a relaxation of fused lasso by computing the expected number of zero-to-nonzero and nonzero-to-zero changes: Ep(z|x) "n−1 X i=1 I[zi = 0, zi+1 ̸= 0] # + Ep(z|x) "n−1 X i=1 I[zi ̸= 0, zi+1 = 0] # ind = n−1 X i=1 P(Zi = 0)(1 −P(Zi+1 = 0)) + (1 −P(Zi = 0))P(Zi+1 = 0) . (12) In both cases, we make the assumption that latent variables are independent given x, in Appendix B.1.2 we discuss how to estimate the regularizers for a model p(zi|x, z<i) that conditions on the prefix z<i of sampled HardKuma assignments. We can use regularizers to promote sparsity, but just how much text will our final model select? Ideally, we would target specific values r and solve a constrained optimization problem. In practice, constrained optimisation is very challenging, thus we employ Lagrangian relaxation instead: max λ∈R min φ,θ −E(φ, θ) + λ⊤(R(φ) −r) (13) where R(φ) is a vector of regularisers, e.g. expected L0 and expected fused lasso, and λ is a vector of Lagrangian multipliers λ. Note how this differs from the treatment of Lei et al. (2016) shown in (4) where regularizers are computed for assignments, rather than on expectation, and where λ0, λ1 are fixed hyperparameters. 5 Sentiment Classification As a concrete example, consider the case of sentiment classification where x is a sentence and y is a 2967 5-way sentiment class (from very negative to very positive). The model consists of Zi ∼HardKuma(ai, bi, l, r) Y |x, z ∼Cat(f(x ⊙z; θ)) (14) where the shape parameters a, b = g(x; φ), i.e. two sequences of n strictly positive scalars, are predicted by a NN, and the support boundaries (l, r) are fixed hyperparameters. We first specify an architecture that parameterizes latent selectors and then use a reparameterized sample to restrict which parts of the input contribute encodings for classification:4 ei = emb(xi) hn 1 = birnn(en 1; φr) ui ∼U(0, 1) ai = fa(hi; φa) bi = fb(hi; φb) zi = s(ui; ai, bi, l, r) where emb(·) is an embedding layer, birnn(·; φr) is a bidirectional encoder, fa(·; φa) and fb(·; φb) are feed-forward transformations with softplus outputs, and s(·) turns the uniform sample ui into the latent selector zi (see §3). We then use the sampled z to modulate inputs to the classifier: ei = emb(xi) h(fwd) i = rnn(h(fwd) i−1 , zi ei; θfwd) h(bwd) i = rnn(h(bwd) i+1 , zi ei; θbwd) o = fo(h(fwd) n , h(bwd) 1 ; θo) where rnn(·; θfwd) and rnn(·; θbwd) are recurrent cells such as LSTMs (Hochreiter and Schmidhuber, 1997) that process the sequence in different directions, and fo(·; θo) is a feed-forward transformation with softmax output. Note how zi modulates features ei of the input xi that are available to the recurrent composition function. We then obtain gradient estimates of E(φ, θ) via Monte Carlo (MC) sampling from E(φ, θ) = EU(0,I) [log P(y|x, sφ(u, x), θ)] (15) where z = sφ(u, x) is a shorthand for elementwise application of the transformation from uniform samples to HardKuma samples. This reparameterisation is the key to gradient estimation through stochastic computation graphs (Kingma and Welling, 2014; Rezende et al., 2014). 
4We describe architectures using blocks denoted by layer(inputs; subset of parameters), boldface letters for vectors, and the shorthand vn 1 for a sequence ⟨v1, . . . , vn⟩. SVM (Lei et al., 2016) 0.0154 BiLSTM (Lei et al., 2016) 0.0094 BiRCNN (Lei et al., 2016) 0.0087 BiLSTM (ours) 0.0089 BiRCNN (ours) 0.0088 Table 1: MSE on the BeerAdvocate test set. Deterministic predictions. At test time we make predictions based on what is the most likely assignment for each zi. We arg max across configurations of the distribution, namely, zi = 0, zi = 1, or 0 < zi < 1. When the continuous interval is more likely, we take the expected value of the underlying Kumaraswamy variable. 6 Experiments We perform experiments on multi-aspect sentiment analysis to compare with previous work, as well as experiments on sentiment classification and natural language inference. All models were implemented in PyTorch, and Appendix B provides implementation details. Goal. When rationalizing predictions, our goal is to perform as well as systems using the full input text, while using only a subset of the input text, leaving unnecessary words out for interpretability. 6.1 Multi-aspect Sentiment Analysis In our first experiment we compare directly with previous work on rationalizing predictions (Lei et al., 2016). We replicate their setting. Data. A pre-processed subset of the BeerAdvocate5 data set is used (McAuley et al., 2012). It consists of 220,000 beer reviews, where multiple aspects (e.g. look, smell, taste) are rated. As shown in Figure 1, a review typically consists of multiple sentences, and contains a 0-5 star rating (e.g. 3.5 stars) for each aspect. Lei et al. mapped the ratings to scalars in [0, 1]. Model. We use the models described in §5 with two small modifications: 1) since this is a regression task, we use a sigmoid activation in the output layer of the classifier rather than a softmax,6 and 5https://www.beeradvocate.com/ 6From a likelihood learning point of view, we would have assumed a Logit-Normal likelihood, however, to stay closer to Lei et al. (2016), we employ mean squared error. 2968 Method Look Smell Taste % Precision % Selected % Precision % Selected % Precision % Selected Attention (Lei et al.) 80.6 13 88.4 7 65.3 7 Bernoulli (Lei et al.) 96.3 14 95.1 7 80.2 7 Bernoulli (reimpl.) 94.8 13 95.1 7 80.5 7 HardKuma 98.1 13 96.8 7 89.8 7 Table 2: Precision (% of selected words that was also annotated as the gold rationale) and selected (% of words not zeroed out) per aspect. In the attention baseline, the top 13% (7%) of words with highest attention weights are used for classification. Models were selected based on validation loss. 2) we use an extra RNN to condition zi on z<i: ai = fa(hi, si−1; φa) (16a) bi = fb(hi, si−1; φb) (16b) si = rnn(hi, zi, si−1; φs) (16c) For a fair comparison we follow Lei et al. by using RCNN7 cells rather than LSTM cells for encoding sentences on this task. Since this cell is not widely used, we verified its performance in Table 1. We observe that the BiRCNN performs on par with the BiLSTM (while using 50% fewer parameters), and similarly to previous results. Evaluation. A test set with sentence-level rationale annotations is available. The precision of a rationale is defined as the percentage of words with z ̸= 0 that is part of the annotation. We also evaluate the predictions made from the rationale using mean squared error (MSE). Baselines. For our baseline we reimplemented the approach of Lei et al. 
(2016) which we call Bernoulli after the distribution they use to sample z from. We also report their attention baseline, in which an attention score is computed for each word, after which it is simply thresholded to select the top-k percent as the rationale. Results. Table 2 shows the precision and the percentage of selected words for the first three aspects. The models here have been selected based on validation MSE and were tuned to select a similar percentage of words (‘selected’). We observe that our Bernoulli reimplementation reaches the precision similar to previous work, doing a little bit worse for the ‘look’ aspect. Our HardKuma managed to get even higher precision, and it extracted exactly the percentage of text that we spec7An RCNN cell can replace any LSTM cell and works well on text classification problems. See appendix B. 0% 20% 40% 60% 80% 100% Selected Text 0.008 0.009 0.010 0.011 0.012 0.013 MSE Figure 3: MSE of all aspects for various percentages of extracted text. HardKuma (blue crosses) has lower error than Bernoulli (red circles; open circles taken from Lei et al. (2016)) for similar amount of extracted text. The full-text baseline (black star) gets the best MSE. ified (see §4).8 Figure 3 shows the MSE for all aspects for various percentages of extracted text. We observe that HardKuma does better with a smaller percentage of text selected. The performance becomes more similar as more text is selected. 6.2 Sentiment Classification We also experiment on the Stanford Sentiment Treebank (SST) (Socher et al., 2013). There are 5 sentiment classes: very negative, negative, neutral, positive, and very positive. Here we use the HardKuma model described in §5, a Bernoulli model trained with REINFORCE, as well as a BiLSTM. Results. Figure 4 shows the classification accuracy for various percentages of selected text. We observe that HardKuma outperforms the Bernoulli model at each percentage of selected text. HardKuma reaches full-text baseline performance already around 40% extracted text. At that point, it obtains a test score of 45.84, versus 42.22 for Bernoulli and 47.4±0.8 for the full-text baseline. 8We tried to use Lagrangian relaxation for the Bernoulli model, but this led to instabilities (e.g. all words selected). 2969 0% 20% 40% 60% 80% 100% Selected Text 30% 35% 40% 45% 50% Accuracy Figure 4: SST validation accuracy for various percentages of extracted text. HardKuma (blue crosses) has higher accuracy than Bernoulli (red circles) for similar amount of text, and reaches the full-text baseline (black star, 46.3 ± 2σ with σ = 0.7) around 40% text. very negative negative neutral positive very positive 146 992 18231 1511 394 119 603 3378 803 264 112 489 3806 795 299 Total HardKuma Bernoulli Figure 5: The number of words in each sentiment class for the full validation set, the HardKuma (24% selected text) and Bernoulli (25% text). Analysis. We wonder what kind of words are dropped when we select smaller amounts of text. For this analysis we exploit the word-level sentiment annotations in SST, which allows us to track the sentiment of words in the rationale. Figure 5 shows that a large portion of dropped words have neutral sentiment, and it seems plausible that exactly those words are not important features for classification. We also see that HardKuma drops (relatively) more neutral words than Bernoulli. 
6.3 Natural Language Inference In Natural language inference (NLI), given a premise sentence x(p) and a hypothesis sentence x(h), the goal is to predict their relation y which can be contradiction, entailment, or neutral. As our dataset we use the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015). Baseline. We use the Decomposable Attention model (DA) of Parikh et al. (2016).9 DA does not make use of LSTMs, but rather uses attention to find connections between the premise and the hy9Better results e.g. Chen et al. (2017) and data sets for NLI exist, but are not the focus of this paper. pothesis that are predictive of the relation. Each word in the premise attends to each word in the hypothesis, and vice versa, resulting in a set of comparison vectors which are then aggregated for a final prediction. If there is no link between a word pair, it is not considered for prediction. Model. Because the premise and hypothesis interact, it does not make sense to extract a rationale for the premise and hypothesis independently. Instead, we replace the attention between premise and hypothesis with HardKuma attention. Whereas in the baseline a similarity matrix is softmax-normalized across rows (premise to hypothesis) and columns (hypothesis to premise) to produce attention matrices, in our model each cell in the attention matrix is sampled from a HardKuma parameterized by (a, b). To promote sparsity, we use the relaxed L0 to specify the desired percentage of non-zero attention cells. The resulting matrix does not need further normalization. Results. With a target rate of 10%, the HardKuma model achieved 8.5% non-zero attention. Table 3 shows that, even with so many zeros in the attention matrices, it only does about 1% worse compared to the DA baseline. Figure 6 shows an example of HardKuma attention, with additional examples in Appendix B. We leave further explorations with HardKuma attention for future work. Model Dev Test LSTM (Bowman et al., 2016) – 80.6 DA (Parikh et al., 2016) – 86.3 DA (reimplementation) 86.9 86.5 DA with HardKuma attention 86.0 85.5 Table 3: SNLI results (accuracy). <s> The man is walking his cat . <s> Young man walking dog  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 77 21  0  0  0  0  0  0  0  0 88  0  0  0  0  0  0  0  0  0 86  0 Figure 6: Example of HardKuma attention between a premise (rows) and hypothesis (columns) in SNLI (cell values shown in multiples of 10−2). 2970 7 Related Work This work has connections with work on interpretability, learning from rationales, sparse structures, and rectified distributions. We discuss each of those areas. Interpretability. Machine learning research has been focusing more and more on interpretability (Gilpin et al., 2018). However, there are many nuances to interpretability (Lipton, 2016), and amongst them we focus on model transparency. One strategy is to extract a simpler, interpretable model from a neural network, though this comes at the cost of performance. For example, Thrun (1995) extract if-then rules, while Craven and Shavlik (1996) extract decision trees. There is also work on making word vectors more interpretable. Faruqui et al. (2015) make word vectors more sparse, and Herbelot and Vecchi (2015) learn to map distributional word vectors to model-theoretic semantic vectors. Similarly to Lei et al. (2016), Titov and McDonald (2008) extract informative fragments of text by jointly training a classifier and a model predicting a stochastic mask, while relying on Gibbs sampling to do so. 
Their focus is on using the sentiment labels as a weak supervision signal for opinion summarization rather than on rationalizing classifier predictions. There are also related approaches that aim to interpret an already-trained model, in contrast to Lei et al. (2016) and our approach where the rationale is jointly modeled. Ribeiro et al. (2016) make any classifier interpretable by approximating it locally with a linear proxy model in an approach called LIME, and Alvarez-Melis and Jaakkola (2017) propose a framework that returns input-output pairs that are causally related. Learning from rationales. Our work is different from approaches that aim to improve classification using rationales as an additional input (Zaidan et al., 2007; Zaidan and Eisner, 2008; Zhang et al., 2016). Instead, our rationales are latent and we are interested in uncovering them. We only use annotated rationales for evaluation. Sparse layers. Also arguing for enhanced interpretability, Niculae and Blondel (2017) propose a framework for learning sparsely activated attention layers based on smoothing the max operator. They derive a number of relaxations to max, including softmax itself, but in particular, they target relaxations such as sparsemax (Martins and Astudillo, 2016) which, unlike softmax, are sparse (i.e. produce vectors of probability values with components that evaluate to exactly 0). Their activation functions are themselves solutions to convex optimization problems, to which they provide efficient forward and backward passes. The technique can be seen as a deterministic sparsely activated layer which they use as a drop-in replacement to standard attention mechanisms. In contrast, in this paper we focus on binary outcomes rather than K-valued ones. Niculae et al. (2018) extend the framework to structured discrete spaces where they learn sparse parameterizations of discrete latent models. In this context, parameter estimation requires exact marginalization of discrete variables or gradient estimation via REINFORCE. They show that oftentimes distributions are sparse enough to enable exact marginal inference. Peng et al. (2018) propose SPIGOT, a proxy gradient to the non-differentiable arg max operator. This proxy requires an arg max solver (e.g. Viterbi for structured prediction) and, like the straight-through estimator (Bengio et al., 2013), is a biased estimator. Though, unlike ST it is efficient for structured variables. In contrast, in this work we chose to focus on unbiased estimators. Rectified Distributions. The idea of rectified distributions has been around for some time. The rectified Gaussian distribution (Socci et al., 1998), in particular, has found applications to factor analysis (Harva and Kaban, 2005) and approximate inference in graphical models (Winn and Bishop, 2005). Louizos et al. (2017) propose to stretch and rectify samples from the BinaryConcrete (or GumbelSoftmax) distribution (Maddison et al., 2017; Jang et al., 2017). They use rectified variables to induce sparsity in parameter space via a relaxation to L0. We adapt their technique to promote sparse activations instead. Rolfe (2017) learns a relaxation of a discrete random variable based on a tractable mixture of a point mass at zero and a continuous reparameterizable density, thus enabling reparameterized sampling from the half-closed interval [0, ∞). In contrast, with HardKuma we focused on giving support to both 0s and 1s. 
8 Conclusions We presented a differentiable approach to extractive rationales, including an objective that allows 2971 for specifying how much text is to be extracted. To allow for reparameterized gradient estimates and support for binary outcomes we introduced the HardKuma distribution. Apart from extracting rationales, we showed that HardKuma has further potential uses, which we demonstrated on premise-hypothesis attention in SNLI. We leave further explorations for future work. Acknowledgments We thank Luca Falorsi for pointing us to Louizos et al. (2017), which inspired the HardKumaraswamy distribution. This work has received funding from the European Research Council (ERC StG BroadSem 678254), the European Union’s Horizon 2020 research and innovation programme (grant agreement No 825299, GoURMET), and the Dutch National Science Foundation (NWO VIDI 639.022.518, NWO VICI 277-89-002). References David Alvarez-Melis and Tommi Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 412– 421. Association for Computational Linguistics. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466–1477. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Association for Computational Linguistics. Mark Craven and Jude W Shavlik. 1996. Extracting tree-structured representations of trained networks. In Advances in neural information processing systems, pages 24–30. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcomplete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1491–1500. Association for Computational Linguistics. Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89. IEEE. 
Markus Harva and Ata Kaban. 2005. A variational bayesian method for rectified factor analysis. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 1, pages 185–190. IEEE. Aur´elie Herbelot and Eva Maria Vecchi. 2015. Building a shared world: mapping distributional to modeltheoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 22–32. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Association for Computational Linguistics. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. International Conference on Learning Representations. MichaelI. Jordan, Zoubin Ghahramani, TommiS. Jaakkola, and LawrenceK. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In International Conference on Learning Representations. 2972 Ponnambalam Kumaraswamy. 1980. A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(12):79–88. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Association for Computational Linguistics. Zachary Chase Lipton. 2016. The mythos of model interpretability. ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). Christos Louizos, Max Welling, and Diederik P Kingma. 2017. Learning sparse neural networks through l 0 regularization. arXiv preprint arXiv:1712.01312. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continous relaxation of discrete random variables. International Conference on Learning Representations. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning, pages 1614–1623. Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multiaspect reviews. In Data Mining (ICDM), 2012 IEEE 12th International Conference on, pages 1020– 1025. IEEE. Eric Nalisnick and Padhraic Smyth. 2016. Stickbreaking variational autoencoders. arXiv preprint arXiv:1605.06197. Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In Advances in Neural Information Processing Systems, pages 3338–3348. Vlad Niculae, Andr´e F. T. Martins, and Claire Cardie. 2018. Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905–911. Association for Computational Linguistics. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Association for Computational Linguistics. Hao Peng, Sam Thomson, and Noah A. Smith. 2018. 
Backpropagating through structured argmax using a spigot. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1863–1873. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1278–1286, Bejing, China. PMLR. Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97–101. Association for Computational Linguistics. Jason Tyler Rolfe. 2017. Discrete variational autoencoders. In ICLR. Nicholas D. Socci, Daniel D. Lee, and H. Sebastian Seung. 1998. The rectified gaussian distribution. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, pages 350–356. MIT Press. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. Sebastian Thrun. 1995. Extracting rules from artificial neural networks with distributed representations. In Advances in neural information processing systems, pages 505–512. Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. John Winn and Christopher M Bishop. 2005. Variational message passing. Journal of Machine Learning Research, 6(Apr):661–694. Omar Zaidan and Jason Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 31–40, Honolulu, Hawaii. Association for Computational Linguistics. 2973 Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using “annotator rationales” to improve machine learning for text categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260–267. Association for Computational Linguistics. Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-augmented convolutional neural networks for text classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 795–804, Austin, Texas. Association for Computational Linguistics. 
A Kumaraswamy distribution

Figure 7: Kuma plots for various (a, b) parameters.

A Kumaraswamy-distributed variable K ∼ Kuma(a, b) takes on values in the open interval (0, 1) and has density

f_K(k; a, b) = a b k^{a−1} (1 − k^a)^{b−1},   (17)

where a ∈ ℝ_{>0} and b ∈ ℝ_{>0} are shape parameters. Its cumulative distribution takes a simple closed-form expression

F_K(k; a, b) = ∫_0^k f_K(ξ | a, b) dξ   (18a)
             = 1 − (1 − k^a)^b,   (18b)

with inverse

F_K^{−1}(u; a, b) = (1 − (1 − u)^{1/b})^{1/a}.   (19)

A.1 Generalised-support Kumaraswamy

We can generalise the support of a Kumaraswamy variable by specifying two constants l < r and transforming a random variable K ∼ Kuma(a, b) to obtain T ∼ Kuma(a, b, l, r) as shown in (20, left):

t = l + (r − l) k,    k = (t − l)/(r − l).   (20)

The density of the resulting variable is

f_T(t; a, b, l, r)   (21a)
  = f_K((t − l)/(r − l); a, b) · dk/dt   (21b)
  = f_K((t − l)/(r − l); a, b) · 1/(r − l),   (21c)

where r − l > 0 by definition. This affine transformation leaves the cdf unchanged, i.e.

F_T(t_0; a, b, l, r) = ∫_{−∞}^{t_0} f_T(t; a, b, l, r) dt
                     = ∫_{−∞}^{t_0} f_K((t − l)/(r − l); a, b) · 1/(r − l) dt
                     = ∫_{−∞}^{(t_0 − l)/(r − l)} f_K(k; a, b) dk
                     = F_K((t_0 − l)/(r − l); a, b).   (22)

Thus we can obtain samples from this generalised-support Kumaraswamy by sampling from a uniform distribution U(0, 1), applying the inverse transform (19), then shifting and scaling the sample according to (20, left).

A.2 Rectified Kumaraswamy

First, we stretch a Kumaraswamy distribution to include 0 and 1 in its support, that is, with l < 0 and r > 1, we define T ∼ Kuma(a, b, l, r). Then we apply a hard-sigmoid transformation to this variable, that is, h = min(1, max(0, t)), which results in a rectified distribution that gives support to the closed interval [0, 1]. We denote this rectified variable by

H ∼ HardKuma(a, b, l, r)   (23)

whose distribution function is

f_H(h; a, b, l, r) = P(h = 0) δ(h) + P(h = 1) δ(h − 1)
                   + P(0 < h < 1) · [ f_T(h; a, b, l, r) 1_{(0,1)}(h) / P(0 < h < 1) ],   (24)

where

P(h = 0) = P(t ≤ 0) = F_T(0; a, b, l, r) = F_K(−l/(r − l); a, b)   (25)

is the probability of sampling exactly 0, where

P(h = 1) = P(t ≥ 1) = 1 − P(t < 1) = 1 − F_T(1; a, b, l, r) = 1 − F_K((1 − l)/(r − l); a, b)   (26)

is the probability of sampling exactly 1, and

P(0 < h < 1) = 1 − P(h = 0) − P(h = 1)   (27)

is the probability of drawing a continuous value in (0, 1). Note that we used the result in (22) to express these probabilities in terms of the tractable cdf of the original Kumaraswamy variable.

A.3 Reparameterized gradients

Let us consider the case where we need derivatives of a function L(u) of the underlying uniform variable u, as when we compute reparameterized gradients in variational inference. By the chain rule,

∂L/∂u = ∂L/∂h × ∂h/∂t × ∂t/∂k × ∂k/∂u.   (28)

The term ∂L/∂h depends on a differentiable observation model and poses no challenge; the term ∂h/∂t is the derivative of the hard-sigmoid function, which is 0 for t < 0 or t > 1, 1 for 0 < t < 1, and undefined for t ∈ {0, 1}; the term ∂t/∂k = r − l follows directly from (20, left); and the term ∂k/∂u = ∂/∂u F_K^{−1}(u; a, b) depends on the Kumaraswamy inverse cdf (19) and also poses no challenge. Thus the only two discontinuities happen for t ∈ {0, 1}, which is a measure-zero set under the stretched Kumaraswamy: we say this reparameterisation is differentiable almost everywhere, a useful property which essentially circumvents the discontinuity points of the rectifier.
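As a concrete illustration of (17)–(23) and of the reparameterisation in (28), the following is a minimal PyTorch sketch of pathwise HardKuma sampling. The function names are ours, and the stretch bounds l = −0.1, r = 1.1 are an illustrative choice (the text only requires l < 0 and r > 1); this is a sketch, not the authors' released API.

```python
import torch

def kuma_inverse_cdf(u: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Inverse CDF of Kuma(a, b), eq. (19): k = (1 - (1 - u)^(1/b))^(1/a)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def sample_hardkuma(a: torch.Tensor, b: torch.Tensor, l: float = -0.1, r: float = 1.1,
                    eps: float = 1e-6) -> torch.Tensor:
    """Draw h ~ HardKuma(a, b, l, r) as a reparameterised (pathwise) sample.

    1. u ~ U(0, 1)             noise, independent of the parameters
    2. k = F_K^{-1}(u; a, b)   Kumaraswamy sample, eq. (19)
    3. t = l + (r - l) * k     stretch to (l, r), eq. (20, left)
    4. h = min(1, max(0, t))   hard-sigmoid rectification
    """
    u = torch.empty_like(a).uniform_(eps, 1.0 - eps)   # avoid exact 0/1 for stability
    k = kuma_inverse_cdf(u, a, b)
    t = l + (r - l) * k
    return torch.clamp(t, min=0.0, max=1.0)

def prob_zero(a: torch.Tensor, b: torch.Tensor, l: float = -0.1, r: float = 1.1) -> torch.Tensor:
    """P(h = 0) = F_K(-l / (r - l); a, b), eq. (25); useful for the expected L0."""
    k0 = -l / (r - l)
    return 1.0 - (1.0 - k0 ** a) ** b   # F_K(k; a, b) = 1 - (1 - k^a)^b, eq. (18b)
```

Because the sample is a deterministic, almost-everywhere differentiable function of u and of the shape parameters (a, b), gradients flow through steps 2–4 as in (28), while the clamp yields exact 0s and 1s with the probabilities in (25) and (26).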
A.4 HardKumaraswamy PDF and CDF Figure 8 plots the pdf of the HardKumaraswamy for various a and b parameters. Figure 9 does the same but with the cdf. Figure 8: HardKuma pdf for various (a, b). B Implementation Details B.1 Multi-aspect Sentiment Analysis Our hyperparameters are taken from Lei et al. (2016) and listed in Table 4. The pre-trained word embeddings and data sets are available online at http://people.csail.mit.edu/ taolei/beer/. We train for 100 epochs and Figure 9: HardKuma cdf for various (a, b). select the best models based on validation loss. For the MSE trade-off experiments on all aspects combined, we train for a maximum of 50 epochs. Optimizer Adam Learning rate 0.0004 Word embeddings 200D (Wiki, fixed) Hidden size 200 Batch size 256 Dropout 0.1, 0.2 Weight decay 1 ∗10−6 Cell RCNN Table 4: Beer hyperparameters. For the Bernoulli baselines we vary L0 weight λ1 among {0.0002, 0.0003, 0.0004}, just as in the original paper. We set the fused lasso (coherence) weight λ2 to 2 ∗λ1. For the HardKuma models we set a target selection rate to the values targeted in Table 2, and optimize to this end using the Lagrange multiplier. We chose the fused lasso weight from {0.0001, 0.0002, 0.0003, 0.0004}. B.1.1 Recurrent Unit In our multi-aspect sentiment analysis experiments we use the RCNN of Lei et al. (2016). Intuitively, the RCNN is supposed to capture n-gram features that are not necessarily consecutive. We use the bigram version (filter width n = 2) used in 2976 Lei et al. (2016), which is defined as: λt = σ(W λxt + U λht−1 + bλ) c(1) t = λt ⊙c(1) t−1 + (1 −λt) ⊙W1xt c(2) t = λt ⊙c(2) t−1 + (1 −λt) ⊙(c(1) t−1 + W2xt) ht = tanh  c(2) t + b  B.1.2 Expected values for dependent latent variables The expected L0 is a chain of nested expectations, and we solve each term Ep(zi|x,z<i) [I[zi ̸= 0] | z<i] = 1 −FK  −l r−l; ai, bi  (29) as a function of a sampled prefix, and the shape parameters ai, bi = gi(x, z<i; φ) are predicted in sequence. B.2 Sentiment Classification (SST) For sentiment classification we make use of the PyTorch bidirectional LSTM module for encoding sentences, for both the rationale extractor and the classifier. The BiLSTM final states are concatenated, after which a linear layer followed by a softmax produces the prediction. Hyperparameters are listed in Table 5. We apply dropout to the embeddings and to the input of the output layer. Optimizer Adam Learning rate 0.0002 Word embeddings 300D Glove (fixed) Hidden size 150 Batch size 25 Dropout 0.5 Weight decay 1 ∗10−6 Cell LSTM Table 5: SST hyperparameters. B.3 Natural Language Inference (SNLI) Our hyperparameters are taken from Parikh et al. (2016) and listed in Table 6. Different from Parikh et al. is that we use Adam as the optimizer and a batch size of 64. Word embeddings are projected to 200 dimensions with a trained linear layer. Unknown words are mapped to 100 unknown word classes based on the MD5 hash function, just as in Parikh et al. (2016), and unknown word vectors are randomly initialized. We train for 100 epochs, evaluate every 1000 updates, and select the best model based on validation loss. Figure 10 shows a correct and incorrect example with HardKuma attention for each relation type (entailment, contradiction, neutral). Optimizer Adam Learning rate 0.0001 Word embeddings 300D (Glove, fixed) Hidden size 200 Batch size 64 Dropout 0.2 Table 6: SNLI hyperparameters. 2977 <s> The two dogs are black . 
Figure 10: HardKuma attention in SNLI for entailment, contradiction, and neutral. Panels: (a) Entailment (correct), (b) Entailment (incorrect, pred: neutral), (c) Contradiction (correct), (d) Contradiction (incorrect, pred: entailment), (e) Neutral (correct), (f) Neutral (incorrect, pred: entailment).
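For reference, the bigram RCNN cell written out in B.1.1 (filter width n = 2) can be sketched in PyTorch as follows; the module and parameter names are ours, and details such as initialisation or dropout placement may differ from the implementation of Lei et al. (2016).

```python
import torch
import torch.nn as nn

class RCNNCell(nn.Module):
    """Hedged sketch of the bigram RCNN cell of B.1.1 (names are illustrative)."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One linear layer over [x_t ; h_{t-1}] plays the role of W^lambda, U^lambda, b^lambda.
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.w1 = nn.Linear(input_size, hidden_size, bias=False)   # W_1
        self.w2 = nn.Linear(input_size, hidden_size, bias=False)   # W_2
        self.bias = nn.Parameter(torch.zeros(hidden_size))         # b

    def forward(self, x_t, state):
        h_prev, c1_prev, c2_prev = state
        lam = torch.sigmoid(self.gate(torch.cat([x_t, h_prev], dim=-1)))
        c1 = lam * c1_prev + (1.0 - lam) * self.w1(x_t)
        c2 = lam * c2_prev + (1.0 - lam) * (c1_prev + self.w2(x_t))
        h = torch.tanh(c2 + self.bias)
        return h, (h, c1, c2)
```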
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2978 Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context Zihang Dai⇤12, Zhilin Yang⇤12, Yiming Yang1, Jaime Carbonell1, Quoc V. Le2, Ruslan Salakhutdinov1 1Carnegie Mellon University, 2Google Brain {dzihang,zhiliny,yiming,jgc,rsalakhu}@cs.cmu.edu, [email protected] Abstract Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, TransformerXL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-ofthe-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch1. 1 Introduction Language modeling is among the important problems that require modeling long-term dependency, with successful applications such as unsupervised pretraining (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018). However, it has been a challenge to equip neural networks with the capability to model long-term dependency in sequential data. Recurrent neural networks (RNNs), in particular Long Short⇤Equal contribution. Order determined by swapping the one in Yang et al. (2017). 1https://github.com/kimiyoung/ transformer-xl Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), have been a standard solution to language modeling and obtained strong results on multiple benchmarks. Despite the wide adaption, RNNs are difficult to optimize due to gradient vanishing and explosion (Hochreiter et al., 2001), and the introduction of gating in LSTMs and the gradient clipping technique (Graves, 2013) might not be sufficient to fully address this issue. Empirically, previous work has found that LSTM language models use 200 context words on average (Khandelwal et al., 2018), indicating room for further improvement. On the other hand, the direct connections between long-distance word pairs baked in attention mechanisms might ease optimization and enable the learning of long-term dependency (Bahdanau et al., 2014; Vaswani et al., 2017). Recently, Al-Rfou et al. (2018) designed a set of auxiliary losses to train deep Transformer networks for character-level language modeling, which outperform LSTMs by a large margin. Despite the success, the LM training in Al-Rfou et al. (2018) is performed on separated fixed-length segments of a few hundred characters, without any information flow across segments. 
As a consequence of the fixed context length, the model cannot capture any longer-term dependency beyond the predefined context length. In addition, the fixed-length segments are created by selecting a consecutive chunk of symbols without respecting the sentence or any other semantic boundary. Hence, the model lacks necessary contextual information needed to well predict the first few symbols, leading to inefficient optimization and inferior performance. We refer to this problem as context fragmentation. To address the aforementioned limitations of fixed-length contexts, we propose a new architecture called Transformer-XL (meaning extra long). We introduce the notion of recurrence into our 2979 deep self-attention network. In particular, instead of computing the hidden states from scratch for each new segment, we reuse the hidden states obtained in previous segments. The reused hidden states serve as memory for the current segment, which builds up a recurrent connection between the segments. As a result, modeling very longterm dependency becomes possible because information can be propagated through the recurrent connections. Meanwhile, passing information from the previous segment can also resolve the problem of context fragmentation. More importantly, we show the necessity of using relative positional encodings rather than absolute ones, in order to enable state reuse without causing temporal confusion. Hence, as an additional technical contribution, we introduce a simple but more effective relative positional encoding formulation that generalizes to attention lengths longer than the one observed during training. Transformer-XL obtained strong results on five datasets, varying from word-level to characterlevel language modeling. Transformer-XL is also able to generate relatively coherent long text articles with thousands of tokens (see Appendix E), trained on only 100M tokens. Our main technical contributions include introducing the notion of recurrence in a purely selfattentive model and deriving a novel positional encoding scheme. These two techniques form a complete set of solutions, as any one of them alone does not address the issue of fixed-length contexts. Transformer-XL is the first self-attention model that achieves substantially better results than RNNs on both character-level and word-level language modeling. 2 Related Work In the last few years, the field of language modeling has witnessed many significant advances, including but not limited to devising novel architectures to better encode the context (Bengio et al., 2003; Mikolov et al., 2010; Merity et al., 2016; Al-Rfou et al., 2018), improving regularization and optimization algorithms (Gal and Ghahramani, 2016) , speeding up the Softmax computation (Grave et al., 2016a) , and enriching the output distribution family (Yang et al., 2017). To capture the long-range context in language modeling, a line of work directly feeds a representation of the wider context into the network as an additional input. Existing works range from ones where context representations are manually defined (Mikolov and Zweig, 2012; Ji et al., 2015; Wang and Cho, 2015) to others that rely on document-level topics learned from data (Dieng et al., 2016; Wang et al., 2017). More broadly, in generic sequence modeling, how to capture long-term dependency has been a long-standing research problem. 
From this perspective, since the ubiquitous adaption of LSTM, many efforts have been spent on relieving the vanishing gradient problem, including better initialization (Le et al., 2015), additional loss signal (Trinh et al., 2018), augmented memory structure (Ke et al., 2018) and others that modify the internal architecture of RNNs to ease the optimization (Wu et al., 2016; Li et al., 2018). Different from them, our work is based on the Transformer architecture and shows that language modeling as a real-world task benefits from the ability to learn longer-term dependency. 3 Model Given a corpus of tokens x = (x1, . . . , xT ), the task of language modeling is to estimate the joint probability P(x), which is often auto-regressively factorized as P(x) = Q t P(xt | x<t). With the factorization, the problem reduces to estimating each conditional factor. In this work, we stick to the standard neural approach to modeling the conditional probability. Specifically, a trainable neural network is used to encode the context x<t into a fixed size hidden state, which is multiplied with the word embeddings to obtain the logits. The logits are then fed into the Softmax function, yielding a categorical probability distribution over the next token. 3.1 Vanilla Transformer Language Models In order to apply Transformer or self-attention to language modeling, the central problem is how to train a Transformer to effectively encode an arbitrarily long context into a fixed size representation. Given infinite memory and computation, a simple solution would be to process the entire context sequence using an unconditional Transformer decoder, similar to a feed-forward neural network. However, this is usually infeasible with the limited resource in practice. One feasible but crude approximation is to split the entire corpus into shorter segments of man2980 Segment 1 x1 x2 x4 x3 Segment 2 x8 x5 x6 x7 (a) Train phase. Limited Context x1 x2 x4 x3 x5 x6 Limited Context x2 x3 x5 x4 x6 x1 Limited Context x3 x4 x6 x5 x2 x1 (b) Evaluation phase. Figure 1: Illustration of the vanilla model with a segment length 4. ageable sizes, and only train the model within each segment, ignoring all contextual information from previous segments. This is the idea adopted by Al-Rfou et al. (2018). We call it the vanilla model and visualize it in Fig. 1a. Under this training paradigm, information never flows across segments in either the forward or backward pass. There are two critical limitations of using a fixedlength context. First, the largest possible dependency length is upper bounded by the segment length, which is a few hundred on character-level language modeling (Al-Rfou et al., 2018). Therefore, although the self-attention mechanism is less affected by the vanishing gradient problem compared to RNNs, the vanilla model is not able to fully exploit this optimization advantage. Second, though it is possible to use padding to respect the sentence or other semantic boundaries, in practice it has been standard practice to simply chunk long text into fixed-length segments due to improved efficiency (Peters et al., 2018; Devlin et al., 2018; Al-Rfou et al., 2018). However, simply chunking a sequence into fixed-length segments will lead to the context fragmentation problem as discussed in Section 1. During evaluation, at each step, the vanilla model also consumes a segment of the same length as in training, but only makes one prediction at the last position. 
Then, at the next step, the segment is shifted to the right by only one position, and the new segment has to be processed all from scratch. As shown in Fig. 1b, this procedure ensures that each prediction utilizes the longest possible context exposed during training, and also relieves context fragmentation issue encountered in training. However, this evaluation procedure is extremely expensive. We will show that our proposed architecture is able to substantially improve the evaluation speed. 3.2 Segment-Level Recurrence with State Reuse To address the limitations of using a fixed-length context, we propose to introduce a recurrence mechanism to the Transformer architecture. During training, the hidden state sequence computed for the previous segment is fixed and cached to be reused as an extended context when the model processes the next new segment, as shown in Fig. 2a. Although the gradient still remains within a segment, this additional input allows the network to exploit information in the history, leading to an ability of modeling longer-term dependency and avoiding context fragmentation. Formally, let the two consecutive segments of length L be s⌧= [x⌧,1, · · · , x⌧,L] and s⌧+1 = [x⌧+1,1, · · · , x⌧+1,L] respectively. Denoting the n-th layer hidden state sequence produced for the ⌧-th segment s⌧by hn ⌧2 RL⇥d, where d is the hidden dimension. Then, the n-th layer hidden state for segment s⌧+1 is produced (schematically) as follows, ehn−1 ⌧+1 = ⇥ SG(hn−1 ⌧ ) ◦hn−1 ⌧+1 ⇤ , qn ⌧+1, kn ⌧+1, vn ⌧+1 = hn−1 ⌧+1W> q , ehn−1 ⌧+1W> k , ehn−1 ⌧+1W> v , hn ⌧+1 = Transformer-Layer (qn ⌧+1, kn ⌧+1, vn ⌧+1) . where the function SG(·) stands for stop-gradient, the notation [hu ◦hv] indicates the concatenation of two hidden sequences along the length dimension, and W· denotes model parameters. Compared to the standard Transformer, the critical difference lies in that the key kn ⌧+1 and value vn ⌧+1 are conditioned on the extended context ehn−1 ⌧+1 and hence hn−1 ⌧ cached from the previous segment. We emphasize this particular design by the green paths in Fig. 2a. With this recurrence mechanism applied to every two consecutive segments of a corpus, it essentially creates a segment-level recurrence in the hidden states. As a result, the effective context being utilized can go way beyond just two segments. However, notice that the recurrent dependency between hn ⌧+1 and hn−1 ⌧ shifts one layer downwards 2981 x1 x2 x4 x3 x8 x5 x6 x7 New Segment x12 x9 x10 x11 Fixed (No Grad) x1 x2 x4 x3 x8 x5 x6 x7 Fixed (No Grad) New Segment (a) Training phase. x1 x2 x4 x3 x8 x5 x6 x7 x12 x9 x10 x11 Extended Context (b) Evaluation phase. Figure 2: Illustration of the Transformer-XL model with a segment length 4. per-segment, which differs from the same-layer recurrence in conventional RNN-LMs. Consequently, the largest possible dependency length grows linearly w.r.t. the number of layers as well as the segment length, i.e., O(N ⇥L), as visualized by the shaded area in Fig. 2b. This is analogous to truncated BPTT (Mikolov et al., 2010), a technique developed for training RNNLMs. However, different from truncated BPTT, our method caches a sequence of hidden states instead of the last one, and should be applied together with the relative positional encoding technique described in Section 3.3. Besides achieving extra long context and resolving fragmentation, another benefit that comes with the recurrence scheme is significantly faster evaluation. 
Specifically, during evaluation, the representations from the previous segments can be reused instead of being computed from scratch as in the case of the vanilla model. In our experiments on enwiki8, Transformer-XL is up to 1,800+ times faster than the vanilla model during evaluation (see Section 4). Finally, notice that the recurrence scheme does not need to be restricted to only the previous segment. In theory, we can cache as many previous segments as the GPU memory allows, and reuse all of them as the extra context when processing the current segment. Thus, we can cache a predefined length-M old hidden states spanning (possibly) multiple segments, and refer to them as the memory mn ⌧2 RM⇥d, due to a clear connection to the memory augmented neural networks (Graves et al., 2014; Weston et al., 2014). In our experiments, we set M equal to the segment length during training, and increase it by multiple times during evaluation. 3.3 Relative Positional Encodings While we found the idea presented in the previous subsection very appealing, there is a crucial technical challenge we haven’t solved in order to reuse the hidden states. That is, how can we keep the positional information coherent when we reuse the states? Recall that, in the standard Transformer, the information of sequence order is provided by a set of positional encodings, denoted as U 2 RLmax⇥d, where the i-th row Ui corresponds to the i-th absolute position within a segment and Lmax prescribes the maximum possible length to be modeled. Then, the actual input to the Transformer is the element-wise addition of the word embeddings and the positional encodings. If we simply adapt this positional encoding to our recurrence mechanism, the hidden state sequence would be computed schematically by h⌧+1 = f(h⌧, Es⌧+1 + U1:L) h⌧= f(h⌧−1, Es⌧+ U1:L), where Es⌧2 RL⇥d is the word embedding sequence of s⌧, and f represents a transformation function. Notice that, both Es⌧and Es⌧+1 are associated with the same positional encoding U1:L. As a result, the model has no information to distinguish the positional difference between x⌧,j and x⌧+1,j for any j = 1, . . . , L, resulting in a sheer performance loss. In order to avoid this failure mode, the fundamental idea is to only encode the relative positional information in the hidden states. Conceptually, the positional encoding gives the model a temporal clue or “bias” about how information should be gathered, i.e., where to attend. For the same purpose, instead of incorporating bias statically into the initial embedding, one can inject the same information into the attention score of each layer. More importantly, it is more intuitive and generalizable to define the temporal bias in a relative manner. For instance, when a query vector q⌧,i attends on the key vectors k⌧,i, it does not need to know the absolute position of each key vector to identify the temporal order of the segment. Instead, it suffices to know the relative distance between each key vector k⌧,j and itself q⌧,i, i.e. i−j. Practically, one can create a set of relative posi2982 tional encodings R 2 RLmax⇥d, where the i-th row Ri indicates a relative distance of i between two positions. By injecting the relative distance dynamically into the attention score, the query vector can easily distinguish the representations of x⌧,j and x⌧+1,j from their different distances, making the state reuse mechanism feasible. 
Meanwhile, we won’t lose any temporal information, as the absolute position can be recovered recursively from relative distances. Previously, the idea of relative positional encodings has been explored in the context of machine translation (Shaw et al., 2018) and music generation (Huang et al., 2018). Here, we offer a different derivation, arriving at a new form of relative positional encodings, which not only has a one-to-one correspondence to its absolute counterpart but also enjoys much better generalization empirically (see Section 4). Firstly, in the standard Transformer (Vaswani et al., 2017), the attention score between query qi and key vector kj within the same segment can be decomposed as Aabs i,j = E> xiW> q WkExj | {z } (a) + E> xiW> q WkUj | {z } (b) + U> i W> q WkExj | {z } (c) + U> i W> q WkUj | {z } (d) . Following the idea of only relying on relative positional information, we propose to reparameterize the four terms as follows Arel i,j = E> xiW> q Wk,EExj | {z } (a) + E> xiW> q Wk,RRi−j | {z } (b) + u>Wk,EExj | {z } (c) + v>Wk,RRi−j | {z } (d) . • The first change we make is to replace all appearances of the absolute positional embedding Uj for computing key vectors in term (b) and (d) with its relative counterpart Ri−j. This essentially reflects the prior that only the relative distance matters for where to attend. Note that R is a sinusoid encoding matrix (Vaswani et al., 2017) without learnable parameters. • Secondly, we introduce a trainable parameter u 2 Rd to replace the query U> i W> q in term (c). In this case, since the query vector is the same for all query positions, it suggests that the attentive bias towards different words should remain the same regardless of the query position. With a similar reasoning, a trainable parameter v 2 Rd is added to substitute U> i W> q in term (d). • Finally, we deliberately separate the two weight matrices Wk,E and Wk,R for producing the content-based key vectors and location-based key vectors respectively. Under the new parameterization, each term has an intuitive meaning: term (a) represents contentbased addressing, term (b) captures a contentdependent positional bias, term (c) governs a global content bias, and (d) encodes a global positional bias. In comparison, the formulation in Shaw et al. (2018) only has terms (a) and (b), dropping the two bias terms (c) and (d). Moreover, Shaw et al. (2018) merge the multiplication WkR into a single trainable matrix ˆR, which abandons the inductive bias built into the original sinusoid positional encoding (Vaswani et al., 2017). In contrast, our relative positional embedding R adapts the sinusoid formulation. As a benefit of the inductive bias, a model trained on a memory of some certain length can automatically generalize to a memory several times longer during evaluation. Equipping the recurrence mechanism with our proposed relative positional embedding, we finally arrive at the Transformer-XL architecture. For completeness, we summarize the computational procedure for a N-layer Transformer-XL with a single attention head here. For n = 1, . . . , N: ehn−1 ⌧ = ⇥ SG(mn−1 ⌧ ) ◦hn−1 ⌧ ⇤ qn ⌧, kn ⌧, vn ⌧= hn−1 ⌧ Wn q >, ehn−1 ⌧ Wn k,E >, ehn−1 ⌧ Wn v > An ⌧,i,j = qn ⌧,i >kn ⌧,j + qn ⌧,i >Wn k,RRi−j + u>k⌧,j + v>Wn k,RRi−j an ⌧= Masked-Softmax(An ⌧)vn ⌧ on ⌧= LayerNorm(Linear(an ⌧) + hn−1 ⌧ ) hn ⌧= Positionwise-Feed-Forward(on ⌧) with h0 ⌧ := Es⌧defined as the word embedding sequence. 
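To make the per-layer procedure concrete, the following is a hedged PyTorch sketch of the single-head attention scores A^rel for one segment given cached memory. The parameter and function names are ours, the sketch omits the causal mask, softmax, and value projection, assumes an even model dimension, and materialises W_{k,R} R_{i−j} for every pair (i, j), i.e. the naive quadratic computation discussed immediately below.

```python
import torch

def relative_positions(qlen: int, klen: int) -> torch.Tensor:
    """Distances i - j, with queries at the last qlen positions of the extended context."""
    i = torch.arange(klen - qlen, klen).unsqueeze(1)   # absolute query positions
    j = torch.arange(klen).unsqueeze(0)                # absolute key positions
    return i - j                                       # shape (qlen, klen)

def sinusoid_encoding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoid encoding R evaluated at the given (relative) positions; d_model assumed even."""
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d_model, 2).float() / d_model))
    angles = positions.unsqueeze(-1).float() * inv_freq
    return torch.cat([angles.sin(), angles.cos()], dim=-1)      # (..., d_model)

def rel_attention_scores(h, mem, w_q, w_ke, w_kr, u, v):
    """h: (qlen, d) current segment, mem: (mlen, d) cached states; returns (qlen, qlen+mlen) scores."""
    h_tilde = torch.cat([mem.detach(), h], dim=0)               # SG(mem) concatenated with h
    q = h @ w_q.t()                                             # queries from the current segment only
    k_e = h_tilde @ w_ke.t()                                    # content-based keys over the extended context
    rel = sinusoid_encoding(relative_positions(h.size(0), h_tilde.size(0)), h.size(1))
    k_r = rel @ w_kr.t()                                        # location-based keys, shape (qlen, klen, d)
    ac = (q + u) @ k_e.t()                                      # terms (a) + (c)
    bd = torch.einsum('id,ijd->ij', q + v, k_r)                 # terms (b) + (d)
    return (ac + bd) / h.size(1) ** 0.5                         # scaled scores; Masked-Softmax follows
```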
In addition, it is worth mentioning that a naive way to compute A requires computing Wn k,RRi−j for all pairs (i, j), whose cost is quadratic w.r.t. the sequence length. However, noticing that the value of i −j only ranges from zero to the sequence length, we show a simple computation procedure in Appendix B, which reduces the cost to be linear w.r.t. the sequence length. 4 Experiments 4.1 Main Results We apply Transformer-XL to a variety of datasets on both word-level and character-level language 2983 Model #Param PPL Grave et al. (2016b) - LSTM 48.7 Bai et al. (2018) - TCN 45.2 Dauphin et al. (2016) - GCNN-8 44.9 Grave et al. (2016b) - Neural cache 40.8 Dauphin et al. (2016) - GCNN-14 37.2 Merity et al. (2018) - QRNN 151M 33.0 Rae et al. (2018) - Hebbian + Cache 29.9 Ours - Transformer-XL Standard 151M 24.0 Baevski and Auli (2018) - Adaptive Input⇧ 247M 20.5 Ours - Transformer-XL Large 257M 18.3 Table 1: Comparison with state-of-the-art results on WikiText-103. ⇧indicates contemporary work. Model #Param bpc Ha et al. (2016) - LN HyperNetworks 27M 1.34 Chung et al. (2016) - LN HM-LSTM 35M 1.32 Zilly et al. (2016) - RHN 46M 1.27 Mujika et al. (2017) - FS-LSTM-4 47M 1.25 Krause et al. (2016) - Large mLSTM 46M 1.24 Knol (2017) - cmix v13 1.23 Al-Rfou et al. (2018) - 12L Transformer 44M 1.11 Ours - 12L Transformer-XL 41M 1.06 Al-Rfou et al. (2018) - 64L Transformer 235M 1.06 Ours - 18L Transformer-XL 88M 1.03 Ours - 24L Transformer-XL 277M 0.99 Table 2: Comparison with state-of-the-art results on enwik8. modeling to have a comparison with state-of-theart systems, including WikiText-103 (Merity et al., 2016), enwik8 (LLC, 2009), text8 (LLC, 2009), One Billion Word (Chelba et al., 2013), and Penn Treebank (Mikolov and Zweig, 2012). WikiText-103 is the largest available word-level language modeling benchmark with long-term dependency. It contains 103M training tokens from 28K articles, with an average length of 3.6K tokens per article, which allows testing the ability of long-term dependency modeling. We set the attention length to 384 during training and 1600 during evaluation. We adopted adaptive softmax and input representations (Baevski and Auli, 2018; Grave et al., 2016a). As shown in Table 1, Transformer-XL reduces the previous state-of-theart (SoTA) perplexity from 20.5 to 18.3, which demonstrates the superiority of the TransformerXL architecture. The dataset enwik8 contains 100M bytes of unprocessed Wikipedia text. We compare our architecture with the previous results in Table 2. Under the model size constraint, the 12-layer Transformer-XL achieves a new SoTA result, outModel #Param bpc Cooijmans et al. (2016) - BN-LSTM 1.36 Chung et al. (2016) - LN HM-LSTM 35M 1.29 Zilly et al. (2016) - RHN 45M 1.27 Krause et al. (2016) - Large mLSTM 45M 1.27 Al-Rfou et al. (2018) - 12L Transformer 44M 1.18 Al-Rfou et al. (2018) - 64L Transformer 235M 1.13 Ours - 24L Transformer-XL 277M 1.08 Table 3: Comparison with state-of-the-art results on text8. Model #Param PPL Shazeer et al. (2014) - Sparse Non-Negative 33B 52.9 Chelba et al. (2013) - RNN-1024 + 9 Gram 20B 51.3 Kuchaiev and Ginsburg (2017) - G-LSTM-2 36.0 Dauphin et al. (2016) - GCNN-14 bottleneck 31.9 Jozefowicz et al. (2016) - LSTM 1.8B 30.6 Jozefowicz et al. (2016) - LSTM + CNN 1.04B 30.0 Shazeer et al. (2017) - Low-Budget MoE ⇠5B 34.1 Shazeer et al. (2017) - High-Budget MoE ⇠5B 28.0 Shazeer et al. 
(2018) - Mesh Tensorflow 4.9B 24.0 Baevski and Auli (2018) - Adaptive Input⇧ 0.46B 24.1 Baevski and Auli (2018) - Adaptive Input⇧ 1.0B 23.7 Ours - Transformer-XL Base 0.46B 23.5 Ours - Transformer-XL Large 0.8B 21.8 Table 4: Comparison with state-of-the-art results on One Billion Word. ⇧indicates contemporary work. performing the 12-layer vanilla Transformer from Al-Rfou et al. (2018) by 0.05, while both Transformer variants have a large margin over conventional RNN-based models. Notably, our 12-layer architecture achieves the same result as the 64layer network from Al-Rfou et al. (2018), using only 17% of the parameter budget. In order to see whether better performances can be obtained by increasing the model size, we train 18-layer and 24-layer Transformer-XLs with increased model sizes. With the attention length 784 during training and 3,800 during evaluation, we obtained a new SoTA result and our method is the first to break through 1.0 on widely-studied characterlevel benchmarks. Different from Al-Rfou et al. (2018), Transformer-XL does not need any auxiliary losses, and thus all benefits are credited to a better architecture. Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning. The compari2984 Model #Param PPL Inan et al. (2016) - Tied Variational LSTM 24M 73.2 Zilly et al. (2016) - Variational RHN 23M 65.4 Zoph and Le (2016) - NAS Cell 25M 64.0 Merity et al. (2017) - AWD-LSTM 24M 58.8 Pham et al. (2018) - Efficient NAS 24M 58.6 Liu et al. (2018) - Differentiable NAS 23M 56.1 Yang et al. (2017) - AWD-LSTM-MoS 22M 55.97 Melis et al. (2018) - Dropout tuning 24M 55.3 Ours - Transformer-XL 24M 54.52 Merity et al. (2017) - AWD-LSTM+Finetune† 24M 57.3 Yang et al. (2017) - MoS+Finetune† 22M 54.44 Table 5: Comparison with state-of-the-art results on Penn Treebank. † indicates using two-step finetuning. son with previous methods is summarized in Table 3. Again, Transformer-XL achieves the new SoTA result with a clear margin. One Billion Word does not preserve any longterm dependency because sentences have been shuffled. Consequently, this dataset mainly tests the ability of modeling only short-term dependency. The comparison between Transformer-XL and the other methods is shown in Table 4. Although Transformer-XL is mainly designed to better capture longer-term dependency, it dramatically improves the single-model SoTA from 23.7 to 21.8. Specifically, Transformer-XL significantly outperforms a contemporary method using vanilla Transformers (Baevski and Auli, 2018), suggesting the advantage of Transformer-XL is generalizable to modeling short sequences. We also report the results on word-level Penn Treebank in Table 5. Similar to AWD-LSTM (Merity et al., 2017), we apply variational dropout and weight average to Transformer-XL. With proper regularization, Transformer-XL achieves a new SoTA result among models without two-step finetuning. Penn Treebank has only 1M training tokens, which implies that Transformer-XL also generalizes well even on small datasets. 4.2 Ablation Study We conduct two sets of ablation studies to examine the effects of two proposed techniques used in Transformer-XL: the recurrence mechanism and the new positional encoding scheme. 
The first study is performed on WikiText-103, which requires modeling long-term dependency. The results are reported in Table 6. Among the compared encoding schemes, Shaw et al. (2018) is relative, while Vaswani et al. (2017) and Al-Rfou et al. (2018) are absolute. “Full” and “half” losses refer to applying a cross entropy loss to all or the recent half positions in the segment. We found that absolute encodings only work well with half losses because half losses exclude positions with very short attention lengths during training for better generalization. Table 6 shows that both the recurrence mechanism and our encoding scheme are necessary to achieve the best performance, as well as generalizing to longer attention sequences during evaluation time. Although the backpropagation length during training is only 128, with the two techniques the attention length can be increased to 640 at test time. In the standard setting with 151M parameters, the perplexity decreases as the attention length increases. Since the recurrence mechanism costs additional memory, we also compare Transformer-XL with baselines under the same GPU memory constraints. As shown in Table 10 in Appendix A, despite using a shorter backpropagation length, Transformer-XL remains superior to the baselines. The second study targets at isolating the effects of resolving the context fragmentation problem from the benefit of capturing longer context length. In order to achieve this goal, we deliberately choose a dataset that does not require longterm dependency, so that any improvement from establishing the recurrence can be attributed to solving the context fragmentation. Specifically, we perform this controlled experiment on the One Billion Word dataset, which can only benefit from removing the context fragmentation. We train a 20-layer Transformer-XL with ⇠0.3B parameters for 400K steps. As shown in Table 7, using segment-level recurrence substantially improves performance even when long-term dependency is not needed, which is consistent with our previous discussion that the recurrence mechanism resolves the context fragmentation problem. Moreover, our relative positional encodings is also superior to Shaw et al. (2018) on short sequences. 4.3 Relative Effective Context Length Khandelwal et al. (2018) proposed a method to evaluate the Effective Context Length (ECL) of a sequence model. ECL is the longest length to which increasing the context span would lead to a gain more than a threshold. However, ECL ignores the fact that it is harder to get improvement when a model already achieves a lower per2985 Remark Recurrence Encoding Loss PPL init PPL best Attn Len Transformer-XL (128M) 3 Ours Full 27.02 26.77 500 3 Shaw et al. (2018) Full 27.94 27.94 256 3 Ours Half 28.69 28.33 460 7 Ours Full 29.59 29.02 260 7 Ours Half 30.10 30.10 120 7 Shaw et al. (2018) Full 29.75 29.75 120 7 Shaw et al. (2018) Half 30.50 30.50 120 7 Vaswani et al. (2017) Half 30.97 30.97 120 Transformer (128M)† 7 Al-Rfou et al. (2018) Half 31.16 31.16 120 Transformer-XL (151M) 3 Ours Full 23.43 23.09 640 23.16 450 23.35 300 Table 6: Ablation study on WikiText-103. For the first two blocks, we use a slightly smaller model (128M parameters). † indicates that the corresponding row is reduced to the same setting as the Transformer network in (Al-Rfou et al., 2018), except that two auxiliary losses are not implemented in our experiments. “PPL init” refers to using the same length as training. “PPL best” indicates the perplexity obtained by using the optimal length. 
“Attn Len” is the shortest possible attention length during evaluation to achieve the corresponding result (PPL best). Increasing the attention length during evaluation improves performance only when our positional encoding is used. The “Transformer-XL (151M)” setting uses a standard parameter budget as previous work (Merity et al., 2018), where we observe a similar effect when increasing the attention length during evaluation. Method PPL Ours 25.2 With Shaw et al. (2018) encodings 25.7 Without recurrence 27.1 Table 7: Ablation study on One Billion Word, a dataset without long-term dependency. Model r = 0.1 r = 0.5 r = 1.0 Transformer-XL 151M 900 800 700 QRNN 500 400 300 LSTM 400 300 200 Transformer-XL 128M 700 600 500 - use Shaw et al. (2018) encoding 400 400 300 - remove recurrence 300 300 300 Transformer 128 128 128 Table 8: Relative effective context length (RECL) comparison. See text for the definition of RECL and r. The first three models and the last four models are compared as two model groups when we calculate RECL (RECL is computed on a model group rather than a single model). Each group has the same parameter budget. plexity using only a shorter context, and thus it is not suitable for fair comparison among multiple models. We instead propose a new metric called Relative Effective Context Length (RECL). RECL is defined on a model group instead of a single model, and the gain of a long context is measure by the relative improvement over the best short context model. As such, the model group shares the same baseline to enable fair comparison. RECL also has a parameter r, which means constraining the comparison on top-r hard examples. See Appedix C for more details about RECL. As shown in Table 8, Transformer-XL manages to model dependency of 900 words long on average with r = 0.1. The RECL of TransformerXL is 80% and 450% longer than recurrent networks and Transformer respectively. Both the recurrence mechanism and our positional encodings contribute to a longer RECL. This further substantiates our argument that Transformer-XL is able to model longer-term dependency. 4.4 Generated Text Trained only on WikiText-103 which is mediumsized, Transformer-XL is already able to generate relatively coherent articles with thousands of tokens without manual cherry picking, despite minor flaws. Please refer to Appendix E for samples. 4.5 Evaluation Speed Finally, we compare the evaluation speed of our model with the vanilla Transformer model (AlRfou et al., 2018). As shown in Table 9, due to the state reuse scheme, Transformer-XL achieves an up to 1,874 times speedup during evaluation. 5 Conclusions Transformer-XL obtains strong perplexity results, models longer-term dependency than RNNs and Transformer, achieves substantial speedup during 2986 Attn Len How much Al-Rfou et al. (2018) is slower 3,800 1,874x 2,800 1,409x 1,800 773x 800 363x Table 9: Slowdown in terms of running time during evaluation. Evaluation is based on per-token time on one GPU. evaluation, and is able to generate coherent text articles. We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling. Acknowledgments ZD and YY were supported in part by National Science Foundation (NSF) under the grant IIS1546329 and by the DOE-Office of Science under the grant ASCR #KJ040201. ZY and RS were supported in part by the Office of Naval Research grant N000141812861, the NSF grant IIS1763562, the Nvidia fellowship, and the Siebel scholarship. 
References Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2018. Character-level language modeling with deeper self-attention. arXiv preprint arXiv:1808.04444. Alexei Baevski and Michael Auli. 2018. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Shaojie Bai, J Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704. Tim Cooijmans, Nicolas Ballas, César Laurent, Ça˘glar Gülçehre, and Aaron Courville. 2016. Recurrent batch normalization. arXiv preprint arXiv:1603.09025. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. 2016. Topicrnn: A recurrent neural network with long-range semantic dependency. arXiv preprint arXiv:1611.01702. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019–1027. Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2016a. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016b. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. David Ha, Andrew Dai, and Quoc V Le. 2016. Hypernetworks. arXiv preprint arXiv:1609.09106. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jürgen Schmidhuber, et al. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, Andrew M Dai, Matthew D Hoffman, and Douglas Eck. 2987 2018. An improved relative self-attention mechanism for transformer with application to music generation. arXiv preprint arXiv:1809.04281. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462. 
Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. arXiv preprint arXiv:1511.03962. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Nan Rosemary Ke, Anirudh Goyal ALIAS PARTH GOYAL, Olexa Bilaniuk, Jonathan Binas, Michael C Mozer, Chris Pal, and Yoshua Bengio. 2018. Sparse attentive backtracking: Temporal credit assignment through reminding. In Advances in Neural Information Processing Systems, pages 7650–7661. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623. Bryon Knol. 2017. cmix v13. http://www. byronknoll.com/cmix.html. Ben Krause, Liang Lu, Iain Murray, and Steve Renals. 2016. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959. Oleksii Kuchaiev and Boris Ginsburg. 2017. Factorization tricks for lstm networks. arXiv preprint arXiv:1703.10722. Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941. Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. 2018. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5457–5466. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. MultiMedia LLC. 2009. Large text compression benchmark. Gábor Melis, Charles Blundell, Tomáš Koˇcisk`y, Karl Moritz Hermann, Chris Dyer, and Phil Blunsom. 2018. Pushing the bounds of dropout. arXiv preprint arXiv:1805.09208. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. SLT, 12(234-239):8. Asier Mujika, Florian Meier, and Angelika Steger. 2017. Fast-slow recurrent neural networks. In Advances in Neural Information Processing Systems, pages 5915–5924. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Jack W Rae, Chris Dyer, Peter Dayan, and Timothy P Lillicrap. 2018. Fast parametric learning with activation memorization. 
arXiv preprint arXiv:1803.10049. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, et al. 2018. Mesh-tensorflow: Deep learning for supercomputers. In Advances in Neural Information Processing Systems, pages 10434–10443. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. 2988 Noam Shazeer, Joris Pelemans, and Ciprian Chelba. 2014. Skip-gram language modeling using sparse non-negative matrix probability estimation. arXiv preprint arXiv:1412.1454. Trieu H Trinh, Andrew M Dai, Thang Luong, and Quoc V Le. 2018. Learning longer-term dependencies in rnns with auxiliary losses. arXiv preprint arXiv:1803.00144. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Tian Wang and Kyunghyun Cho. 2015. Largercontext language modelling. arXiv preprint arXiv:1511.03729. Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2017. Topic compositional neural language model. arXiv preprint arXiv:1712.09783. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. 2016. On multiplicative integration with recurrent neural networks. In Advances in neural information processing systems, pages 2856–2864. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2017. Breaking the softmax bottleneck: A high-rank rnn language model. arXiv preprint arXiv:1711.03953. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. 2016. Recurrent highway networks. arXiv preprint arXiv:1607.03474. Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2989–3001 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 2989 Domain Adaptation of Neural Machine Translation by Lexicon Induction Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell Language Technologies Institute School of Computer Science Carnegie Mellon University {junjieh,gneubig,jgc}@cs.cmu.edu, [email protected] Abstract It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift. In this paper, we argue that this is a dual effect of the highly lexicalized nature of NMT, resulting in failure for sentences with large numbers of unknown words, and lack of supervision for domain-specific words. To remedy this problem, we propose an unsupervised adaptation method which finetunes a pre-trained out-of-domain NMT model using a pseudo-in-domain corpus. Specifically, we perform lexicon induction to extract an in-domain lexicon, and construct a pseudo-parallel in-domain corpus by performing word-for-word back-translation of monolingual in-domain target sentences. In five domains over twenty pairwise adaptation settings and two model architectures, our method achieves consistent improvements without using any in-domain parallel sentences, improving up to 14 BLEU over unadapted models, and up to 2 BLEU over strong back-translation baselines. 1 Introduction Neural machine translation (NMT) has demonstrated impressive performance when trained on large-scale corpora (Bojar et al., 2018). However, it has also been noted that NMT models trained on corpora in a particular domain tend to perform poorly when translating sentences in a significantly different domain (Chu and Wang, 2018; Koehn and Knowles, 2017). Previous work in the context of phrase-based statistical machine translation (Daum´e III and Jagarlamudi, 2011) has noted that unseen (OOV) words account for a large portion of translation errors when switching to new domains. However this problem of OOV words in cross-domain transfer is under-examined Code/scripts are released at https://github.com/ junjiehu/dali. in the context of NMT, where both training methods and experimental results will differ greatly. In this paper, we try to fill this gap, examining domain adaptation methods for NMT specifically focusing on correctly translating unknown words. As noted by Chu and Wang (2018), there are two important distinctions to make in adaptation methods for MT. The first is data requirements; supervised adaptation relies on in-domain parallel data, and unsupervised adaptation has no such requirement. There is also a distinction between model-based and data-based methods. Modelbased methods make explicit changes to the model architecture such as jointly learning domain discrimination and translation (Britz et al., 2017), interpolation of language modeling and translation (Gulcehre et al., 2015; Domhan and Hieber, 2017), and domain control by adding tags and word features (Kobus et al., 2017). On the other hand, data-based methods perform adaptation either by combining in-domain and out-of-domain parallel corpora for supervised adaptation (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016) or by generating pseudo-parallel corpora from indomain monolingual data for unsupervised adaptation (Sennrich et al., 2016a; Currey et al., 2017). 
Specifically, in this paper we tackle the task of data-based, unsupervised adaptation, where representative methods include creation of a pseudo-parallel corpus by back-translation of in-domain monolingual target sentences (Sennrich et al., 2016a), or construction of a pseudo-parallel in-domain corpus by copying monolingual target sentences to the source side (Currey et al., 2017). However, while these methods have the potential to strengthen the target-language decoder through the addition of in-domain target data, they do not explicitly provide direct supervision of domain-specific words, which we argue is one of the major difficulties caused by domain shift. To remedy this problem, we propose a new data-based method for unsupervised adaptation that specifically focuses on the unknown word problem: domain adaptation by lexicon induction (DALI).

[Figure 1: Work flow of domain adaptation by lexicon induction (DALI). Elements shown: out-of-domain parallel corpus, GIZA++ alignment, supervised seed lexicon (e.g., Volagen: styles, Nets: web); in-domain unaligned corpus, GAN, unsupervised seed lexicon (e.g., therapie: therapy, müdigkeit: tiredness); induction; in-domain target corpus; pseudo-in-domain source corpus.]

Our proposed method leverages large amounts of monolingual data to find translations of in-domain unseen words, and constructs a pseudo-parallel in-domain corpus via word-for-word back-translation of monolingual in-domain target sentences into source sentences. More specifically, we leverage existing supervised (Xing et al., 2015) and unsupervised (Conneau et al., 2018) lexicon induction methods that project source word embeddings to the target embedding space, and find translations of unseen words by their nearest neighbors. For supervised lexicon induction, we learn such a mapping function under the supervision of a seed lexicon extracted from out-of-domain parallel sentences using word alignment. For unsupervised lexicon induction, we follow Conneau et al. (2018) to infer a lexicon by adversarial training and iterative refinement. In experiments on German-to-English translation across five domains (Medical, IT, Law, Subtitles, and Koran), we find that DALI improves both RNN-based (Bahdanau et al., 2015) and Transformer-based (Vaswani et al., 2017) models trained on an out-of-domain corpus, with gains as high as 14 BLEU. When the proposed method is combined with back-translation, we can further improve performance by up to 4 BLEU. Further analysis shows that the areas in which gains are observed are largely orthogonal to back-translation; our method is effective in translating in-domain unseen words, while back-translation mainly improves the fluency of source sentences, which helps the training of the NMT decoder.

2 Domain Adaptation by Lexicon Induction

Our method works in two steps: (1) we use lexicon induction methods to learn an in-domain lexicon from in-domain monolingual source data Dsrc-in and target data Dtgt-in as well as out-of-domain parallel data Dparallel-out; (2) we use this lexicon to create a pseudo-parallel corpus for MT.

2.1 Lexicon Induction

Given separate source and target word embeddings, X, Y ∈ R^{d×N}, trained on all available monolingual source and target sentences across all domains, we leverage existing lexicon induction methods that perform supervised (Xing et al., 2015) or unsupervised (Conneau et al., 2018) learning of a mapping f(X) = WX that transforms source embeddings to the target space, and then select nearest neighbors in the embedding space to extract translation lexicons.
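For readers who want a concrete picture of the unsupervised option mentioned here (Conneau et al., 2018), the following is a minimal, unofficial PyTorch sketch of the adversarial training of W summarized under “Unsupervised Embedding Mapping” below. The discriminator size, optimizer settings, and the approximate orthogonalization update are illustrative assumptions rather than the exact settings used for DALI; only the 512-dimensional embeddings match the setup described in Section 3.1.

```python
import torch
import torch.nn as nn

d = 512                                   # embedding size (matches the fastText setup in Section 3.1)
W = nn.Linear(d, d, bias=False)           # the mapping f(x) = Wx
D = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(0.2), nn.Linear(2048, 1))
bce = nn.BCEWithLogitsLoss()
opt_W = torch.optim.SGD(W.parameters(), lr=0.1)
opt_D = torch.optim.SGD(D.parameters(), lr=0.1)

def adversarial_step(x, y, beta=0.01):
    """One update; x, y are batches of source / target embeddings ([B, d])."""
    # (1) discriminator learns to tell mapped source (label 1) from target (label 0)
    opt_D.zero_grad()
    logits = torch.cat([D(W(x).detach()), D(y)]).squeeze(-1)
    labels = torch.cat([torch.ones(x.size(0)), torch.zeros(y.size(0))])
    bce(logits, labels).backward()
    opt_D.step()
    # (2) mapping tries to make mapped source look like a target embedding
    opt_W.zero_grad()
    bce(D(W(x)).squeeze(-1), torch.zeros(x.size(0))).backward()
    opt_W.step()
    # (3) keep W close to orthogonal (a common choice; an assumption here)
    with torch.no_grad():
        Wm = W.weight
        W.weight.copy_((1 + beta) * Wm - beta * Wm @ Wm.t() @ Wm)

# toy usage
x, y = torch.randn(32, d), torch.randn(32, d)
adversarial_step(x, y)
```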
Supervised Embedding Mapping. Supervised learning of the mapping function requires a seed lexicon of size n, denoted as L = {(s, t)_i}_{i=1}^{n}. We represent the source and target word embeddings of the i-th translation pair (s, t)_i by the i-th column vectors of X^(n), Y^(n) ∈ R^{d×n} respectively. Xing et al. (2015) show that by enforcing an orthogonality constraint on W ∈ O_d(R), we can obtain a closed-form solution from a singular value decomposition (SVD) of Y^(n) X^(n)T:

W* = argmin_{W ∈ O_d(R)} ||Y^(n) − W X^(n)||_F = U V^T,  where U Σ V^T = SVD(Y^(n) X^(n)T).   (1)

In a domain adaptation setting we have parallel out-of-domain data Dparallel-out, which can be used to extract a seed lexicon. Algorithm 1 shows the procedure for extracting this lexicon. We use the word alignment toolkit GIZA++ (Och and Ney, 2003) to extract word translation probabilities P(t|s) and P(s|t) in both forward and backward directions from Dparallel-out, and extract lexicons Lfw = {(s, t), ∀P(t|s) > 0} and Lbw = {(s, t), ∀P(s|t) > 0}. We take the union of the lexicons in both directions and further prune out translation pairs containing punctuation that is non-identical. To avoid multiple translations of either a source or target word, we find the most common translation pairs in Dparallel-out, sorting translation pairs by the number of times they occur in Dparallel-out in descending order, and keeping those pairs with the highest frequency in Dparallel-out.

Algorithm 1 Supervised lexicon extraction
Input: Parallel out-of-domain data Dparallel-out
Output: Seed lexicon L = {(s, t)}_{i=1}^{n}
 1: Run GIZA++ on Dparallel-out to get Lfw, Lbw
 2: Lg = Lfw ∪ Lbw
 3: Remove from Lg pairs in which only one of s and t is punctuation
 4: Initialize a counter C[(s, t)] = 0 for all (s, t) ∈ Lg
 5: for (src, tgt) ∈ Dparallel-out do
 6:     for (s, t) ∈ Lg do
 7:         if s ∈ src and t ∈ tgt then
 8:             C[(s, t)] = C[(s, t)] + 1
 9: Sort C by its values in descending order
10: L = {}, S = {}, T = {}
11: for (s, t) ∈ C do
12:     if s ∉ S and t ∉ T then
13:         L = L ∪ {(s, t)}
14:         S = S ∪ {s}, T = T ∪ {t}
15: return L

Unsupervised Embedding Mapping. For unsupervised training, we follow Conneau et al. (2018) in mapping source word embeddings to the target word embedding space through adversarial training. Details can be found in the reference, but briefly, a discriminator is trained to distinguish between an embedding sampled from WX and one from Y, while W is trained to prevent the discriminator from identifying the origin of an embedding by making WX and Y as close as possible.

Induction. Once we obtain the matrix W from either supervised or unsupervised training, we map all the possible in-domain source words to the target embedding space. We compute the nearest neighbors of an embedding by a distance metric, Cross-Domain Similarity Local Scaling (CSLS; Conneau et al. (2018)):

CSLS(Wx, y) = 2 cos(Wx, y) − r_T(Wx) − r_S(y)
r_T(Wx) = (1/K) Σ_{y′ ∈ N_T(Wx)} cos(Wx, y′)

where r_T(Wx) and r_S(y) measure the average cosine similarity between Wx (resp. y) and its K nearest neighbors in the target (resp. mapped source) space. To ensure the quality of the extracted lexicons, we only consider mutual nearest neighbors, i.e., pairs of words that are mutually nearest neighbors of each other according to CSLS. This significantly decreases the size of the extracted lexicon, but improves the reliability.
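As an unofficial, concrete illustration of Eq. 1 and the CSLS-based mutual-nearest-neighbor extraction above, the sketch below computes the closed-form mapping with an SVD and keeps only mutual nearest neighbors under CSLS. The toy dimensions, the value of K, and the brute-force similarity matrix are simplifying assumptions; over full vocabularies one would typically use approximate nearest-neighbor search.

```python
import numpy as np

def procrustes(X_seed, Y_seed):
    """Closed-form orthogonal mapping of Eq. 1.
    X_seed, Y_seed: d x n column-wise embeddings of the n seed pairs."""
    U, _, Vt = np.linalg.svd(Y_seed @ X_seed.T)
    return U @ Vt                                  # W*, a d x d orthogonal matrix

def csls_lexicon(W, X, Y, K=10):
    """Mutual-nearest-neighbor lexicon under CSLS.
    X: d x Ns source embeddings, Y: d x Nt target embeddings."""
    WX = W @ X
    WX = WX / np.linalg.norm(WX, axis=0, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    cos = WX.T @ Yn                                # Ns x Nt cosine similarities
    r_T = np.sort(cos, axis=1)[:, -K:].mean(1)     # avg sim of K nearest targets per source
    r_S = np.sort(cos, axis=0)[-K:, :].mean(0)     # avg sim of K nearest mapped sources per target
    csls = 2 * cos - r_T[:, None] - r_S[None, :]
    s2t = csls.argmax(axis=1)                      # best target for each source
    t2s = csls.argmax(axis=0)                      # best source for each target
    return [(i, int(s2t[i])) for i in range(X.shape[1]) if t2s[s2t[i]] == i]

# toy usage with random embeddings (d=4, 50 source / 60 target words;
# the first 20 index-aligned pairs stand in for the seed lexicon)
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(4, 50)), rng.normal(size=(4, 60))
W = procrustes(X[:, :20], Y[:, :20])
print(len(csls_lexicon(W, X, Y, K=5)))
```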
2.2 NMT Data Generation and Training

Finally, we use this lexicon to create pseudo-parallel in-domain data to train NMT models. Specifically, we follow Sennrich et al. (2016a) in back-translating the in-domain monolingual target sentences to the source language, but instead of using a pre-trained target-to-source NMT system, we simply perform word-for-word translation using the induced lexicon L. Each word on the target side of L can be deterministically back-translated to a source word, since we take the nearest neighbor of a target word as its translation according to CSLS. If a target word is not mutually nearest to any source word, we cannot find a translation in L and we simply copy this target word to the source side. We find that more than 80% of the words can be translated by the induced lexicons. We denote the constructed pseudo-parallel in-domain corpus as Dpseudo-parallel-in.

During training, we first pre-train an NMT system on an out-of-domain parallel corpus Dparallel-out, and then fine-tune the NMT model on a constructed parallel corpus. More specifically, to avoid overfitting to the extracted lexicons, we sample an equal number of sentences from Dparallel-out and get a fixed subset D′parallel-out, where |D′parallel-out| = |Dpseudo-parallel-in|. We concatenate D′parallel-out with Dpseudo-parallel-in, and fine-tune the NMT model on the combined corpus.

3 Experimental Results

3.1 Data

We follow the same setup and train/dev/test splits of Koehn and Knowles (2017), using a German-to-English parallel corpus that covers five different domains. Data statistics are shown in Table 2. Note that these domains are very distant from each other. Following Koehn and Knowles (2017), we process all the data with byte-pair encoding (Sennrich et al., 2016b) to construct a vocabulary of 50K subwords. To build an unaligned monolingual corpus for each domain, we randomly shuffle the parallel corpus and split it into two parts with equal numbers of parallel sentences. We use the target and source sentences of the first and second halves respectively. We combine all the unaligned monolingual source and target sentences of all five domains to train a skip-gram model using fasttext (Bojanowski et al., 2017). We obtain source and target word embeddings in 512 dimensions by running 10 epochs with a context window of 10 and 10 negative samples.

Corpus    | Words       | Sentences  | W/S
Medical   | 12,867,326  | 1,094,667  | 11.76
IT        | 2,777,136   | 333,745    | 8.32
Subtitles | 106,919,386 | 13,869,396 | 7.71
Law       | 15,417,835  | 707,630    | 21.80
Koran     | 9,598,717   | 478,721    | 20.05

Table 2: Corpus statistics over five domains.
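The pseudo-parallel corpus construction and the fine-tuning mix of Section 2.2 can be summarized by the following small sketch. It is not the released DALI code; the plain-dictionary lexicon format, whitespace tokenization, and toy data are illustrative assumptions, while copying uncovered target words and sampling an equally sized out-of-domain subset follow the description above.

```python
import random

def word_for_word_backtranslate(tgt_sentences, lexicon_t2s):
    """Build the pseudo-parallel in-domain corpus Dpseudo-parallel-in.
    lexicon_t2s: dict mapping a target word to its induced source translation.
    Target words with no entry are copied unchanged to the source side."""
    pseudo = []
    for tgt in tgt_sentences:
        src = [lexicon_t2s.get(w, w) for w in tgt.split()]
        pseudo.append((" ".join(src), tgt))
    return pseudo

def build_finetuning_data(pseudo_in, parallel_out, seed=1):
    """Concatenate the pseudo in-domain corpus with an equally sized,
    fixed random subset of the out-of-domain parallel corpus."""
    rng = random.Random(seed)
    subset_out = rng.sample(parallel_out, min(len(pseudo_in), len(parallel_out)))
    return pseudo_in + subset_out

# toy usage
lex = {"tiredness": "müdigkeit", "therapy": "therapie"}
mono_in = ["the therapy reduces tiredness ."]
out_par = [("wir gehen nach hause .", "we are going home .")] * 5
data = build_finetuning_data(word_for_word_backtranslate(mono_in, lex), out_par)
print(data[0])
```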
Domain    | Model | Method    | Medical | IT    | Subtitles | Law   | Koran | Avg.  | Gain
Medical   | LSTM  | Unadapted | 46.19   | 4.62  | 2.54      | 7.05  | 1.25  | 3.87  |
          |       | DALI      |         | 11.32 | 7.79      | 9.72  | 3.85  | 8.17  | +4.31
          | XFMR  | Unadapted | 49.66   | 4.54  | 2.39      | 7.77  | 0.93  | 3.91  |
          |       | DALI      |         | 10.99 | 8.25      | 11.32 | 4.22  | 8.70  | +4.79
IT        | LSTM  | Unadapted | 7.43    | 57.79 | 5.49      | 4.10  | 2.52  | 4.89  |
          |       | DALI      | 20.44   |       | 9.53      | 8.63  | 4.85  | 10.86 | +5.98
          | XFMR  | Unadapted | 6.96    | 60.43 | 6.42      | 4.50  | 2.45  | 5.08  |
          |       | DALI      | 19.49   |       | 10.49     | 8.75  | 4.62  | 10.84 | +5.76
Subtitles | LSTM  | Unadapted | 11.36   | 12.27 | 27.29     | 10.95 | 10.57 | 11.29 |
          |       | DALI      | 21.63   | 12.99 |           | 11.50 | 10.17 | 16.57 | +2.79
          | XFMR  | Unadapted | 16.51   | 14.46 | 30.71     | 11.55 | 12.96 | 13.87 |
          |       | DALI      | 26.17   | 17.56 |           | 13.96 | 13.18 | 17.72 | +3.85
Law       | LSTM  | Unadapted | 15.91   | 6.28  | 4.52      | 40.52 | 2.37  | 7.27  |
          |       | DALI      | 24.57   | 10.07 | 9.11      |       | 4.72  | 12.12 | +4.85
          | XFMR  | Unadapted | 16.35   | 5.52  | 4.57      | 46.59 | 1.82  | 7.07  |
          |       | DALI      | 26.98   | 11.65 | 9.14      |       | 5.15  | 13.23 | +6.17
Koran     | LSTM  | Unadapted | 0.63    | 0.45  | 2.47      | 0.67  | 19.40 | 1.06  |
          |       | DALI      | 12.90   | 5.25  | 7.49      | 4.80  |       | 7.61  | +6.56
          | XFMR  | Unadapted | 0.00    | 0.44  | 2.58      | 0.29  | 15.53 | 0.83  |
          |       | DALI      | 14.27   | 5.24  | 9.01      | 4.94  |       | 8.37  | +7.54

Table 1: BLEU scores of LSTM-based and Transformer (XFMR) based NMT models when trained on one domain (rows) and tested on another domain (columns). The last two columns show the average performance of the unadapted baselines and DALI, and the average gains.

3.2 Main Results

We first compare DALI with other adaptation strategies on both RNN-based and Transformer-based NMT models. Table 1 shows the performance of the two models when trained on one domain (rows) and tested on another domain (columns). We fine-tune the unadapted baselines using pseudo-parallel data created by DALI. We use the unsupervised lexicon here for all settings, and leave a comparison across lexicon creation methods to Table 3. Based on the last two columns in Table 1, DALI substantially improves both NMT models, with average gains of 2.79-7.54 BLEU over the unadapted baselines.

We further compare DALI with two popular data-based unsupervised adaptation methods that leverage in-domain monolingual target sentences: (1) a method that copies target sentences to the source side (Copy; Currey et al. (2017)) and (2) back-translation (BT; Sennrich et al. (2016a)), which translates target sentences to the source language using a backward NMT model. We compare DALI with supervised (DALI-S) and unsupervised (DALI-U) lexicon induction. Finally, we (1) experiment with directly extracting a lexicon from an in-domain corpus using GIZA++ and Algorithm 1 (DALI-GIZA++), and (2) list scores for systems trained directly on in-domain data (In-domain).

Method      | Medical | Subtitles | Law   | Koran
Unadapted   | 7.43    | 5.49      | 4.10  | 2.52
Copy        | 13.28   | 6.68      | 5.32  | 3.22
BT          | 18.51   | 11.25     | 11.55 | 8.18
DALI-U      | 20.44   | 9.53      | 8.63  | 4.90
DALI-S      | 19.03   | 9.80      | 8.64  | 4.91
DALI-U+BT   | 24.34   | 13.35     | 13.74 | 8.11
DALI-GIZA++ | 28.39   | 9.37      | 11.45 | 8.09
In-domain   | 46.19   | 27.29     | 40.52 | 19.40

Table 3: Comparison among different methods on adapting NMT from IT to the {Medical, Subtitles, Law, Koran} domains, along with two oracle results.

For simplicity, we test the adaptation performance of the LSTM-based NMT model, and train an LSTM-based NMT system with the same architecture on the out-of-domain corpus for English-to-German back-translation. First, DALI is competitive with BT, outperforming it on the medical domain and underperforming it on the other three domains. Second, the gain from DALI is orthogonal to that from BT: when combining the pseudo-parallel in-domain corpus obtained from DALI-U with that from BT, we can further improve by 2-5 BLEU points on three of the four domains. Third, the gains through the usage of DALI-U and DALI-S are surprisingly similar, although the lexicons induced by these two methods have only about 50% overlap. A detailed analysis of the two lexicons can be found in Section 3.5.
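The scores in Tables 1 and 3 are corpus-level BLEU on each in-domain test set. A minimal way to compute such a score for one trained-on/tested-on pair, assuming plain-text hypothesis and reference files and using sacreBLEU (the paper does not state which scorer was used), is:

```python
import sacrebleu

def corpus_bleu(hyp_path, ref_path):
    """Corpus-level BLEU of a system output file against a reference file."""
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

# e.g. score an IT-trained system on the Medical test set (paths are placeholders)
# print(corpus_bleu("out/it2medical.hyp", "data/medical/test.en"))
```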
3.3 Word-level Translation Accuracy

Since our proposed method focuses on leveraging word-for-word translation for data augmentation, we analyze the word-for-word translation accuracy for unseen in-domain words. A source word is considered an unseen in-domain word when it never appears in the out-of-domain corpus. We examine two questions: (1) How much does each adaptation method improve the translation accuracy of unseen in-domain words? (2) How does the frequency of an in-domain word affect its translation accuracy? To fairly compare the various methods, we use a lexicon extracted from the in-domain parallel data with the GIZA++ alignment toolkit as a reference lexicon Lg. For each unseen in-domain source word in the test file, when the corresponding target word in Lg occurs in the output, we consider it a “hit” for the word pair.

First, we compare the percentage of successful in-domain word translations across all adaptation methods. Specifically, we scan the source and reference of the test set to count the number of valid hits C, then scan the output file to get the count Ct in the same way. Finally, the hit percentage is calculated as Ct/C. The results of experiments adapting IT to the other domains are shown in Figure 2.

[Figure 2: Translation accuracy of in-domain words of the test set for several data augmentation baselines and our proposed method, with IT as the out-of-domain corpus. Panels: IT→Medical, IT→Law, IT→Subtitles, IT→Koran; systems: Unadapted, Copy, BT, DALI-U, DALI-S, DALI-U+BT.]

The hit percentage of the unadapted output is extremely low, which confirms our assumption that in-domain word translation poses a major challenge in adaptation scenarios. We also find that all augmentation methods can improve the translation accuracy of unseen in-domain words, but our proposed method outperforms all others in most cases. The unseen in-domain word translation accuracy is quantitatively correlated with the BLEU scores, which shows that correctly translating in-domain unseen words is a major factor contributing to the improvements seen by these methods.

Second, to investigate the effect of frequency on word-for-word translation, we bucket the unseen in-domain words by their frequency percentile in the pseudo-in-domain training dataset, and calculate the average translation accuracy of unseen in-domain words within each bucket. The results are plotted in Figure 3, in which the x-axis represents each bucket within a range of frequency percentiles and the y-axis represents the average translation accuracy.

[Figure 3: Translation accuracy of in-domain unseen words in the test set with regard to the frequency percentile of lexicon words inserted in the pseudo-in-domain training corpus. Buckets: (0,20], (20,40], (40,60], (60,80], (80,100]; curves: Kor-S/U, Law-S/U, Med-S/U, Sub-S/U.]

With increasing frequency of words in the pseudo-in-domain data, the translation accuracy also increases, which is consistent with our intuition that the neural network is able to remember high-frequency tokens better. Since the absolute numbers of occurrences differ across domains, the accuracy values within each bucket vary across domains, but all lines follow the same ascending pattern.

3.4 When do Copy, BT and DALI Work?

From Figure 2, we can see that Copy, BT and DALI all improve the translation accuracy of in-domain unseen words. In this section, we explore exactly what types of words each method improves on. We randomly pick some in-domain unseen word pairs which are translated 100% correctly in the translation outputs of systems trained with each method. We also count these word pairs' occurrences in the pseudo-in-domain training set. The examples are shown in Table 5. We find that in the case of Copy, over 80% of the successful word translation pairs have the same spelling format for both source and target words, and almost all of the remaining pairs share subword components. In short, and as expected, Copy excels at improving the accuracy of words that have identical forms on the source and target sides.

As expected, our proposed method mainly increases the translation accuracy of the pairs in our induced lexicon. It also leverages subword components to successfully translate compound words. For example, “monotherapie” does not occur in our induced lexicon, but the model is still able to translate it correctly based on its subwords “mono@@” and “therapie” by leveraging the successfully induced pair “therapie” and “therapy”. It is more surprising to find that adding a back-translated corpus significantly improves the model's ability to translate in-domain unseen words correctly, even if the source word never occurs in the pseudo-in-domain corpus. Even more surprisingly, we find that the majority of the correctly translated source words are not segmented at all, which means that the model does not leverage subword components to make correct translations. In fact, for most of the correctly translated in-domain word pairs, the source words are never seen during training. To further analyze this, we use our BT model to do word-for-word translation of these individual words without any other context, and the results turn out to be extremely bad, indicating that the model does not actually find the correspondence of these word pairs. Rather, it relies solely on the decoder to make the correct translation on the target side for test sentences that have related target sentences in the training set.

To verify this, Table 4 shows an example extracted from the pseudo-in-domain training set. BT-T shows a monolingual in-domain target sentence and BT-S is the back-translated source sentence. Though the back-translation fails to generate any in-domain words and the meaning is unfaithful, it succeeds in generating a similar sentence pattern to the correct source sentence, namely “... ist eine (ein) ... , die (das) ... enthält .”. The model can easily detect the pattern through the attention mechanism and translate the highly related word “medicine” correctly.

From the above analysis, it can be seen that the improvements brought by the augmentation of BT and DALI are largely orthogonal. The former utilizes highly related contexts to translate unseen in-domain words, while the latter directly injects reliable word translation pairs into the training corpus. This explains why we get further improvements over either single method alone.

3.5 Lexicon Coverage

Intuitively, with a larger lexicon, we would expect better adaptation performance. In order to examine this hypothesis, we run experiments using pseudo-in-domain training sets generated by our induced lexicon at various coverage levels. Specifically, we split the lexicon into 5 folds randomly and use portions of it comprising folds 1 through 5, which correspond to 20%, 40%, 60%, 80% and 100% of the original data. We calculate the coverage of the words in the Medical test set compared with each pseudo-in-domain train-

BT-S   | es ist eine Nachricht , die die aktive Substanz enthält .
BT-T   | Invirase is a medicine containing the active substance saquinavir .
Test-S | ABILIFY ist ein Arzneimittel , das den Wirkstoff Aripiprazol enthält .
Test-T | Prevenar is a medicine containing the design of Arixtra .

Table 4: An example that shows why BT could translate the OOV word “Arzneimittel” correctly into “medicine”. “enthält” corresponds to the English word “contain”.
Though BT can’t translate a correct source sentence for augmentation, it generates sentences with certain patterns that could be identified by the model, which helps translate in-domain unseen words. Type Word Pair Count Copy (tremor, tremor) 452 (347, 347) 18 BT (ausschuss, committee) 0 (apotheker, pharmacist) 0 (toxizit¨at, toxicity) 0 DALI (m¨udigkeit, tiredness) 444 (therapie, therapy) 9535 (monotherapie, monotherapy) 0 Table 5: 100% successful word translation examples from the output of the IT to Medical adaptation task. The Count column shows the number of occurrences of word pairs in the pseudo-in-domain training set. 0.0 0.5 1.0 IT-Medical 0.78 0.80 0.82 0.84 Word Coverage 0.0 0.5 1.0 IT-Law 0.87 0.88 0.89 14 16 18 20 6 7 8 BLEU Word Coverage BLEU Figure 4: Word coverage and BLEU score of the Medical test set when the pseudo-in-domain training set is constructed with different level of lexicon coverage. ing set. We use each training set to train a model and get its corresponding BLEU score. From Figure 4, we find that the proportion of the used lexicon is highly correlated with both the known word coverage in the test set and its BLEU score, indicating that by inducing a larger and more accurate lexicon, further improvements can likely be made. 3.6 Semi-supervised Adaptation Although we target unsupervised domain adaptation, it is also common to have a limited amount of in-domain parallel sentences in a semi-supervised adaptation setting. To measure efficacy of DALI in this setting, we first pre-train an NMT model on a parallel corpus in the IT domain, and adapt it to the medical domain. The pre-trained NMT obtains 7.43 BLEU scores on the medical test set. During fine-tuning, we sample 330,278 out-ofdomain parallel sentences, and concatenate them with 547,325 pseudo-in-domain sentences generated by DALI and the real in-domain sentences. We also compare the performance of fine-tuning on the combination of the out-of-domain parallel sentences with only real in-domain sentences. We vary the number of real in-domain sentences in the range of [20K, 40K, 80K, 160K, 320K, 480K]. In Figure 5(a), semi-supervised adaptation outperforms unsupervised adaptation after we add more than 20K real in-domain sentences. As the number of real in-domain sentences increases, the BLEU scores on the in-domain test set improve, and finetuning on both the pseudo and real in-domain sentences further improves over fine-tuning sorely on the real in-domain sentences. In other words, given a reasonable number of real in-domain sentences in a common semi-supervised adaptation setting, DALI is still helpful in leveraging a large number of monolingual in-domain sentences. 3.7 Effect of Out-of-Domain Corpus The size of data that we use to train the unadapted NMT and BT NMT models varies from hundreds of thousands to millions, and covers a wide range of popular domains. Nonetheless, the unadapted NMT and BT NMT models can both benefit from training on a large out-of-domain corpus. We examine the question: how does fine-tuning on weak and strong unadapted NMT models affect the adaptation performance? To this end, we compare DALI and BT on adapting from subtitles to medical domains, where the two largest corpus in subtitles and medical domains have 13.9 and 1.3 million sentences. We vary the size of outof-domain corpus in a range of [0.5, 1, 2, 4, 13.9] million, and fix the number of in-domain target sentences to 0.6 million. 
In Figure 5(b), as the size of out-of-domain parallel sentences increases, 2996 Source ABILIFY ist ein Arzneimittel , das den Wirkstoff Aripiprazol enthlt . BLEU Reference abilify is a medicine containing the active substance aripiprazole . 1.000 Unadapted the time is a figure that corresponds to the formula of a formula . 0.204 Copy abilify is a casular and the raw piprexpression offers . 0.334 BT prevenar is a medicine containing the design of arixtra . 0.524 DALI abilify is a arzneimittel that corresponds to the substance ariprazole . 0.588 DALI+BT abilify is a arzneimittel , which contains the substance aripiprazole . 0.693 Table 6: Translation outputs from various data augmentation method and our method for IT→Medical adaptation. 20K 80K 160K 320K 480K In-Domain Size(Thousand) 18 21 24 27 30 33 36 BLEU semi+DALI-U semi DALI-U (a) IT-Medical 2M 4M 8M 14M Out-of-Domain Size (Million) 0 4 8 12 16 20 24 BLEU Unadapted DALI-U BT BT+DALI-U (b) Subtitles-Medical Figure 5: Effect of training on increasing number of in-domain (a) and out-of-domain (b) parallel sentences we have a stronger upadapted NMT which consistently improves the BLEU score of the in-domain test set. Both DALI and BT also benefit from adapting a stronger NMT model to the new domain. Combining DALI with BT further improves the performance, which again confirms our finding that the gains from DALI and BT are orthogonal to each other. Having a stronger BT model improves the quality of synthetic data, while DALI aims at improving the translation accuracy of OOV words by explicitly injecting their translations. 3.8 Effect of Domain Coverage We further test the adaptation performance of DALI when we train our base NMT model on the WMT14 German-English parallel corpus. The corpus is a combination of Europarl v7, Common Crawl corpus and News Commentary, and consists of 4,520,620 parallel sentences from a wider range of domains. In Table 7, we compare the BLEU scores of the test sets between the unadapted NMT and the adapted NMT using DALI-U. We also show the percentage of source words or subwords in the training corpus of five domains being covered by the WMT14 corpus. Although the unadapted NMT system trained on the WMT14 corpus obtains higher scores than that trained on the corpus of each individual domain, DALI still imDomain Base DALI Word Subword Medical 28.94 30.06 44.1% 69.1% IT 18.27 23.88 45.1% 77.4% Subtitles 22.59 22.71 35.9% 62.5% Law 24.26 24.55 59.0% 73.7% Koran 11.64 12.19 83.1% 74.5% Table 7: BLEU scores of LSTM based NMT models when trained on WMT14 De-En data (Base), and adapted to one domain (DALI). The last two columns show the percentage of source word/subword overlap between the training data on the WMT domain and other five domains. proves the adaptation performance over the unadapted NMT system by up to 5 BLEU score. 3.9 Qualitative Examples Finally, we show outputs generated by various data augmentation methods. Starting with the unadapted output, we can see that the output is totally unrelated with the reference. By adding the copied corpus, words that have the same spelling in the source and target languages e.g. “abilify” are correctly translated. With back translation, the output is more fluent; though keywords like “abilify” are not well translated, in-domain words that are highly related with the context like “medicine” are correctly translated. DALI manages to translate in-domain words like “abilify” and “substance”, which are added by DALI using the induced lexicon. 
By combining both BT and DALI, the output becomes fluent and also contains correctly translated in-domain keywords of the sentence. 4 Related Work There is much work on supervised domain adaptation setting where we have large out-of-domain parallel data and much smaller in-domain parallel data. Luong and Manning (2015) propose training a model on an out-of-domain corpus and do finetuning with small sized in-domain parallel data 2997 to mitigate the domain shift problem. Instead of naively mixing out-of-domain and in-domain data, Britz et al. (2017) circumvent the domain shift problem by jointly learning domain discrimination and the translation. Joty et al. (2015) and Wang et al. (2017) address the domain adaptation problem by assigning higher weight to out-ofdomain parallel sentences that are close to the indomain corpus. Our proposed method focuses on solving the adaptation problem with no in-domain parallel sentences, a strict unsupervised setting. Prior work on using monolingual data to do data augmentation could be easily adapted to the domain adaptation setting. Early studies on databased methods such as self-enhancing (Schwenk, 2008; Lambert et al., 2011) translate monolingual source sentences by a statistical machine translation system, and continue training the system on the synthetic parallel data. Recent databased methods such as back-translation (Sennrich et al., 2016a) and copy-based methods (Currey et al., 2017) mainly focus on improving fluency of the output sentences and translation of identical words, while our method targets OOV word translation. In addition, there have been several attempts to do data augmentation using monolingual source sentences (Zhang and Zong, 2016; ChineaRios et al., 2017). Besides, model-based methods change model architectures to leverage monolingual corpus by introducing an extra learning objective, such as auto-encoder objective (Cheng et al., 2016) and language modeling objective (Ramachandran et al., 2017). Another line of research on using monolingual data is unsupervised machine translation (Artetxe et al., 2018; Lample et al., 2018b,a; Yang et al., 2018). These methods use word-for-word translation as a component, but require a careful design of model architectures, and do not explicitly tackle the domain adaptation problem. Our proposed data-based method does not depend on model architectures, which makes it orthogonal to these model-based methods. Our work shows that apart from strengthening the target-side decoder, direct supervision over the in-domain unseen words is essential for domain adaptation. Similar to this, a variety of methods focus on solving OOV problems in translation. Daum´e III and Jagarlamudi (2011) induce lexicons for unseen words and construct phrase tables for statistical machine translation. However, it is nontrivial to integrate lexicon into NMT models that lack explicit use of phrase tables. With regard to NMT, Arthur et al. (2016) use a lexicon to bias the probability of the NMT system and show promising improvements. Luong and Manning (2015) propose to emit OOV target words by their corresponding source words and do post-translation for those OOV words with a dictionary. Fadaee et al. (2017) propose an effective data augmentation method that generates sentence pairs containing rare words in synthetically created contexts, but this requires parallel training data not available in the fully unsupervised adaptation setting. 
Arcan and Buitelaar (2017) leverage a domainspecific lexicon to replace unknown words after decoding. Zhao et al. (2018) design a contextual memory module in an NMT system to memorize translations of rare words. Kothur et al. (2018) treats an annotated lexicon as parallel sentences and continues training the NMT system on the lexicon. Though all these works leverage a lexicon to address the problem of OOV words, none specifically target translating in-domain OOV words under a domain adaptation setting. 5 Conclusion In this paper, we propose a data-based, unsupervised adaptation method that focuses on domain adaption by lexicon induction (DALI) for mitigating unknown word problems in NMT. We conduct extensive experiments to show consistent improvements of two popular NMT models through the usage of our proposed method. Further analysis show that our method is effective in fine-tuning a pre-trained NMT model to correctly translate unknown words when switching to new domains. Acknowledgements The authors thank anonymous reviewers for their constructive comments on this paper. This material is based upon work supported by the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 2998 References Mihael Arcan and Paul Buitelaar. 2017. Translating domain-specific expressions in knowledge bases with neural machine translation. CoRR, abs/1709.02184. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. International Conference on Learning Representations. Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1557–1567, Austin, Texas. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Ondej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Denny Britz, Quoc Le, and Reid Pryzant. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 118–126. Association for Computational Linguistics. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1965–1974, Berlin, Germany. 
Association for Computational Linguistics. Mara Chinea-Rios, ´Alvaro Peris, and Francisco Casacuberta. 2017. Adapting neural machine translation with parallel synthetic data. In Proceedings of the Second Conference on Machine Translation, pages 138–147, Copenhagen, Denmark. Association for Computational Linguistics. Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304–1319, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. International Conference on Learning Representations. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156, Copenhagen, Denmark. Association for Computational Linguistics. Hal Daum´e III and Jagadeesh Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 407–412. Association for Computational Linguistics. Tobias Domhan and Felix Hieber. 2017. Using targetside monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500–1505, Copenhagen, Denmark. Association for Computational Linguistics. Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567– 573, Vancouver, Canada. Association for Computational Linguistics. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06897. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Shafiq Joty, Hassan Sajjad, Nadir Durrani, Kamla AlMannai, Ahmed Abdelali, and Stephan Vogel. 2015. How to avoid unwanted pregnancies: Domain adaptation using neural network models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1259–1270. Association for Computational Linguistics. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain control for neural machine translation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 372–378. INCOMA Ltd. 2999 Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics. Sachith Sri Ram Kothur, Rebecca Knowles, and Philipp Koehn. 2018. Document-level adaptation for neural machine translation. 
In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 64–73, Melbourne, Australia. Association for Computational Linguistics. Patrik Lambert, Holger Schwenk, Christophe Servan, and Sadaf Abdul-Rauf. 2011. Investigations on translation model adaptation using monolingual data. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 284–293, Edinburgh, Scotland. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. International Conference on Learning Representations. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 383–391, Copenhagen, Denmark. Association for Computational Linguistics. Holger Schwenk. 2008. Investigations on large-scale lightly-supervised training for statistical machine translation. In International Workshop on Spoken Language Translation (IWSLT) 2008. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560–566, Vancouver, Canada. Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Association for Computational Linguistics. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46– 55, Melbourne, Australia. Association for Computational Linguistics. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Austin, Texas. Association for Computational Linguistics. Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391–400, Brussels, Belgium. Association for Computational Linguistics. 3000 A Appendices A.1 Hyper-parameters For the RNN-based model, we use two stacked LSTM layers for both the encoder and the decoder with a hidden size and a embedding size of 512, and use feed-forward attention (Bahdanau et al., 2015). We use a Transformer model building on top of the OpenNMT toolkit (Klein et al., 2017) with six stacked self-attention layers, and a hidden size and a embedding size of 512. The learning rate is varied over the course of training (Vaswani et al., 2017). LSTM XFMR Embedding size 512 512 Hidden size 512 512 # encoder layers 2 6 # decoder layers 2 6 Batch 64 sentences 8096 tokens Learning rate 0.001 Optimizer Adam Adam Beam size 5 5 Max decode length 100 100 Table 8: Configurations of LSTM-based NMT and Transformer (XFMR) NMT, and tuning parameters during training and decoding A.2 Domain Shift To measure the extend of domain shift, we train a 5-gram language model on the target sentences of the training set on one domain, and compute the average perplexity of the target sentences of the training set on the other domain. In Table 9, we can find significant differences of the average perplexity across domains. Domain Medical IT Subtitles Law Koran Medical 1.10 2.13 2.34 1.70 2.15 IT 1.95 1.21 2.06 1.83 2.05 Subtitles 1.98 2.13 1.31 1.84 1.82 Law 1.88 2.15 2.50 1.12 2.16 Koran 2.09 2.23 2.08 1.94 1.11 Table 9: Perplexity of 5-gram language model trained on one domain (columns) and tested on another domain (rows) A.3 Lexicon Overlap Table 10 shows the overlap of the induced lexicons from supervised, unsupervised induction and GIZA++ extraction across five domains. The second and third column show the percentage of unique lexicons induced only by unsupervised induction and supervised induction respectively, while the last column shows the percentage of the lexicons induced by both methods. Corpus Unsupervised Supervised Intersection Medical 5.3% 5.4% 44.7% IT 4.1% 4.1% 45.2% Subtitles 1.0% 1.0% 37.1% Law 4.4% 4.5% 45.7% Koran 2.1% 2.0% 40.6% Table 10: Lexicon overlap between supervised, unsupervised and GIZA++ lexicon. 3001 Domain |In| Medical IT Subtitles Law Koran Medical 125724 0 (0.00) 123670 (0.98) 816762 (6.50) 159930 (1.27) 12697 (0.10) IT 140515 108879 (0.77) 0 (0.00) 818303 (5.82) 167630 (1.19) 12512 (0.09) Subtitles 857527 84959 (0.10) 101291 (0.12) 0 (0.00) 129323 (0.15) 3345 (0.00) Law 189575 96079 (0.51) 118570 (0.63) 797275 (4.21) 0 (0.00) 10899 (0.06) Koran 18292 120129 (6.57) 134735 (7.37) 842580 (46.06) 182182 (9.96) 0 (0.00) Table 11: Out-of-Vocabulary statistics of German Words across five domains. Each row indicates the OOV statistics of the out-of-domain (row) corpus against the in-domain (columns) corpus. The second column shows the vocabulary size of the out-of-domain corpus in each row. 
The remaining columns (3rd-7th) show the number of domain-specific words in each in-domain corpus with respect to the out-of-domain corpus and, in parentheses, the ratio of these domain-specific words to the out-of-domain vocabulary size.

Domain     |In|      Medical         IT              Subtitles        Law             Koran
Medical    68965     0 (0.00)        57206 (0.83)    452166 (6.56)    72867 (1.06)    15669 (0.23)
IT         70652     55519 (0.79)    0 (0.00)        448072 (6.34)    75318 (1.07)    14771 (0.21)
Subtitles  480092    41039 (0.09)    38632 (0.08)    0 (0.00)         53984 (0.11)    4953 (0.01)
Law        92501     49331 (0.53)    53469 (0.58)    441575 (4.77)    0 (0.00)        13399 (0.14)
Koran      22450     62184 (2.77)    62973 (2.81)    462595 (20.61)   83450 (3.72)    0 (0.00)

Table 12: Out-of-Vocabulary statistics of English Words across five domains. Each row indicates the OOV statistics of the out-of-domain (row) corpus against the in-domain (columns) corpora. The second column shows the vocabulary size of the out-of-domain corpus in each row. The remaining columns (3rd-7th) show the number of domain-specific words in each in-domain corpus with respect to the out-of-domain corpus and, in parentheses, the ratio of these domain-specific words to the out-of-domain vocabulary size.
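As a concrete reading of these tables, the following is a minimal sketch of how such out-of-vocabulary statistics could be computed from whitespace-tokenised corpora; the helper and file names are illustrative assumptions, not part of any released code.

def build_vocab(path):
    # Collect the set of word types in a whitespace-tokenised corpus file.
    words = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            words.update(line.split())
    return words

def oov_statistics(out_domain_path, in_domain_path):
    # Vocabulary of the out-of-domain corpus (the |In| column of the tables).
    out_vocab = build_vocab(out_domain_path)
    in_vocab = build_vocab(in_domain_path)
    # Word types of the in-domain corpus that never occur out-of-domain.
    domain_specific = in_vocab - out_vocab
    # The ratio in parentheses divides by the out-of-domain vocabulary size.
    return len(out_vocab), len(domain_specific), len(domain_specific) / len(out_vocab)

# Example with hypothetical file names:
# vocab_size, n_specific, ratio = oov_statistics("medical.de", "it.de")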
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3002–3012 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3002 Reference Network for Neural Machine Translation Han Fu†‡ Chenghao Liu§ Jianling Sun†‡∗ †Zhejiang University, Hangzhou, China ‡Alibaba-Zhejiang University Joint Institute of Frontier Technologies, China §Singapore Management University, Singapore {11821003, sunjl}@zju.edu.cn [email protected] Abstract Neural Machine Translation (NMT) has achieved notable success in recent years. Such a framework usually generates translations in isolation. In contrast, human translators often refer to reference data, either rephrasing the intricate sentence fragments with common terms in source language, or just accessing to the golden translation directly. In this paper, we propose a Reference Network to incorporate referring process into translation decoding of NMT. To construct a reference book, an intuitive way is to store the detailed translation history with extra memory, which is computationally expensive. Instead, we employ Local Coordinates Coding (LCC) to obtain global context vectors containing monolingual and bilingual contextual information for NMT decoding. Experimental results on Chinese-English and English-German tasks demonstrate that our proposed model is effective in improving the translation quality with lightweight computation cost. 1 Introduction Neural Machine Translation (NMT) has enjoyed impressive success in most large-scale translation tasks (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). Typical NMT model to date is a single end-to-end trained deep neural network that encodes the source sentence into a fixed-length vector and generates the words in the target sentence sequentially. The alignment relationship between source and target sentence is learned by the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015). Though the framework has achieved significant success, one critical concern is that NMT generates translations in isolation, which leads to translation inconsistency and ambiguity arising from ∗Corresponding author: Jianling Sun. a single source sentence (Tu et al., 2018). Recently, there have been few attempts to model the semantic information across sentences. The basic ideas are to store a handful of previous source or target sentences with context vectors (Jean et al., 2017; Wang et al., 2017a) or memory components (Maruf and Haffari, 2018; Tu et al., 2018). However, these methods have several limitations. First, the very short view of the previous sentences (usually one or two sentence(s)) is not sufficient enough to catch long term dependencies across paragraphs and storing detailed translation history is computationally expensive. Second, in the realworld scenarios, input data of MT application is often isolated sentences, such as Google Translate, where no cross-sentence contexts are provided. Moreover, translations generated by such document-level NMT models are not stable, effected by the sentences surrounding the current one to translate. To address these limitations, we model the semantic information across sentences by mimicking the human translation process. In real scenarios, there will always be sentences or fragments that the translator can understand the meaning but cannot write down the translations directly. The obstacle could be unfamiliar collocation, descriptions in specific language habits and slang. 
The usual solutions for human are: (1) paraphrasing the sentence in another way, with simpler and more colloquial terms in the source language, and (2) directly referring to the standard translations of the intricate sentence fragments. For example in Table 1, the Chinese word ”zaiyu” is not a common expression. A reference can either provide simple Chinese terms such as ”daizhe rongyu” or directly offer the corresponding English translation ”with honor”. Therefore, if a good quality reference book which covers various translation scenes is provided, it can definitely improve the 3003 source canjia dongaohui de faguo yundongyuan zaiyu fanhui bali. translation French athletes participating in winter olympics returned to paris with honors. Table 1: An example of sentence fragment that is hard to translate. performance of human translators. To be specific, the motivation of this work can be summarized as two aspects corresponding to the two kinds of human reference processes. First, we aim to provide the machine translator with a reference during decoding, which contains all possible source sentence fragments that are semantically similar to the current one. If the system finds it hard to translate the source fragment, it can turn to translate the fragments in the reference. Second, we intend to offer the oracle translations of the current sentence fragments to translate. In this paper, we propose a novel model namely Reference Network that incorporates the referring process into translation decoding of NMT. Instead of storing the detailed sentences or translation history, we propose to generate representations containing global monolingual and bilingual contextual information with Local Coordinate Coding (LCC) (Yu et al., 2009). Specifically, for solution (1), the hidden states of NMT encoder are coded by a linear combination of a set of anchor points in an unsupervised manner. The anchors are capable to cover the entire latent space of the source language seamlessly. For solution (2), we employ local codings to approximate the mapping from source and target contexts to the current target word with a supervised regression function. The local coding is then fed to the decoder to modify the update of the decoder hidden state. In this way, the translation decoding can be improved by offering the representation of a common paraphrase (Figure 1) or golden target translation (Figure 2). We conduct experiments on NIST ChineseEnglish (Zh-En) and WMT German-Chinese (EnDe) translation tasks. The experimental results indicate that the proposed method can effectively exploit the global information and improve the translation quality. The two proposed models significantly outperform the strong NMT baselines by adding only 9.3% and 19.6% parameters respectively. 2 Background 2.1 Neural Machine Translation Our model is built on the RNN-based NMT (Bahdanau et al., 2015). However, since recurrent architecture is not necessary for our approach, the idea can also be applied to ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017). We leave it for future work. Formally, let x = (x1, ..., xm) be a given source sentence and y = (y1, ..., yT ) be the corresponding target sentence. NMT generates the target words sequentially by maximizing the probability of translation conditioned on the source sentence: ˆy = arg max y T X t=1 log p(yt|x, y<t). 
(1) At each timestep, the generation probability is computed as p(yt|x, y<t) = softmax(g(e(yt−1), st, ct)), (2) where g is a transformation function that outputs a vocabulary-sized vector, e(yt−1) is the embedding of previous target word yt−1, ct is the source context vector obtained by attention mechanism, and st is the t-th hidden state of NMT decoder, computed as: st = fd(e(yt−1), st−1, ct), (3) where fd is a nonlinear activation. The source context ct is typically a weighted sum of encoder hidden states as: ct = m X i=1 αti · hi, (4) where attention score αti is the alignment vector of the i-th source word xi and the t-th target word yt: αti = softmax(v⊤ α tanh(Wαst−1+Uαhi)). (5) where Wα, Uα and vα are trainable matrices or vectors. hi is the annotation of xi computed by the NMT encoder. The encoder, generally implemented as a bi-directional RNN, encodes the input sentence into a sequence of source hidden states h = (h1, ..., hm) where hi is obtained by concatenating the forward hidden state −→ hi and backward one ←− hi at timestep i. 3004 𝐡𝐡1 𝑁𝑁 𝐡𝐡2 𝑁𝑁 𝐡𝐡|𝐱𝐱𝑁𝑁| 𝑁𝑁 … 𝑥𝑥2 𝑁𝑁 𝑥𝑥|𝐱𝐱𝑁𝑁| 𝑁𝑁 𝑥𝑥1 𝑁𝑁 𝐱𝐱𝑁𝑁: average pooling 𝐡𝐡1 2 𝐡𝐡2 2 𝐡𝐡|𝐱𝐱2| 2 … 𝑥𝑥2 2 𝑥𝑥|𝐱𝐱2| 2 𝑥𝑥1 2 𝐱𝐱2: average pooling … 𝐡𝐡𝑀𝑀 1 𝐡𝐡𝑀𝑀 2 𝐡𝐡𝑀𝑀 𝑁𝑁 local coordinate coding 𝐡𝐡1 1 𝐡𝐡2 1 𝐡𝐡|𝐱𝐱1| 1 … 𝑥𝑥2 1 𝑥𝑥|𝐱𝐱1| 1 𝑥𝑥1 1 𝐱𝐱1: average pooling 𝐯𝐯2 𝐯𝐯1 𝐯𝐯|𝒞𝒞| … LCC anchors 𝐡𝐡1 𝐡𝐡2 𝐡𝐡𝑚𝑚 … 𝑥𝑥1 𝑥𝑥2 𝑥𝑥𝑚𝑚 attention 𝐬𝐬1 𝐬𝐬𝑡𝑡 𝐬𝐬𝑡𝑡−1 … 𝑦𝑦1 𝑦𝑦𝑡𝑡−1 𝑦𝑦𝑡𝑡 𝐜𝐜1 𝐜𝐜𝑡𝑡 𝐜𝐜𝑡𝑡−1 … attention 𝐜𝐜𝑡𝑡 𝐺𝐺 Figure 1: Framework of NMT with M-RefNet. xi represents the i-th source sentence in the training corpus and |xi| is the length of the sentence. The global context vector cG t can be regarded as a paraphrase of the current source context ct. According to the above formulations, conventional NMT models translate sentences independently. However, human translators usually tend to seek for reference materials when in trouble. Motivated by such common human behaviors, we propose Reference Network to provide global information as a reference book in two ways. First, the model utilizes all source hidden states to paraphrase current source sentence. Second, the model directly provides the target word ˜yt according to the rest translation samples in the training corpus. Since it is impossible to store all information directly, we leverage local coordinate coding (LCC) to compress the semantics into a latent manifold. 2.2 Local Coordinate Coding With the assumption that data usually lies on the lower dimensional manifold of the input space, the manifold approximation of high dimensional input x can be defined as a linear combination of surrounding anchor points as: x ≈γ(x) = X v γv(x)v, (6) where v is an anchor point and γv is the weight corresponding to v such that X v γv(x) = 1. (7) According to the definitions, it is proved in (Yu et al., 2009) that if the anchor points are localized enough, any (lα, lβ)-Lipschitz smooth function f(x) defined on a lower dimensional manifold M can be globally approximated by a linear combination of the function values of a set of the anchors C as: f(x) ≈ X v∈C γv(x)f(v), (8) with the upper bound of the approximation error: lα∥x − X v∈C γv(x)v∥ + X v∈C lβ|γv(x)|∥v − X v∈C γv(x)v∥2. (9) 3 Reference Network In this section, we present our proposed Reference Network (RefNet). 3.1 Overview We propose two models which explore the global information from the training data in different manners as illustrated by Figure 1 and Figure 2. 
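Before describing the two models in detail, the following is a minimal NumPy sketch of the local coordinate coding approximation in Eq. (6)-(8); the anchor set, the distance-based weighting, and the toy function f are illustrative assumptions (the paper instead learns the weights with an attention layer, as described next).

import numpy as np

def lcc_weights(x, anchors):
    # gamma_v(x): one weight per anchor point, summing to one (Eq. 7).
    # A softmax over negative squared distances stands in for the localized
    # weighting of conventional manifold learning; the paper replaces it
    # with a trainable attention score.
    d = -np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(d - d.max())
    return w / w.sum()

def lcc_approximate(x, anchors, f):
    # Eq. (6): reconstruct x as a weighted sum of the anchor points.
    gamma = lcc_weights(x, anchors)
    x_hat = gamma @ anchors
    # Eq. (8): approximate f(x) by the same weighting of f at the anchors.
    f_hat = sum(g * f(v) for g, v in zip(gamma, anchors))
    return x_hat, f_hat

# Example with a toy smooth function on points in R^4:
# anchors = np.random.randn(16, 4)
# x_hat, f_hat = lcc_approximate(np.random.randn(4), anchors, lambda v: np.tanh(v).sum())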
The monolingual reference network (MRefNet) provides a global source context vector to paraphrase the current context ct based on all other source sentences. To be specific, we train several unsupervised anchors as the bases of the semantic space of source contexts and each source sentence in the training corpus can be represented by a weighted sum of the anchors. The bilingual reference network (B-RefNet) generates a referable target embedding according to all sentence pairs in the training corpus to guide output sequence generation. Concretely, we formulate the translation process as a mapping from source and target contexts (ct and st−1) to the current target word embedding e(yt). B-RefNet 3005 𝐡𝐡1 𝐡𝐡2 𝐡𝐡𝑚𝑚 … 𝑥𝑥1 𝑥𝑥2 𝑥𝑥𝑚𝑚 attention 𝐬𝐬1 𝐬𝐬𝑡𝑡 𝐬𝐬𝑡𝑡−1 … 𝑦𝑦1 𝑦𝑦𝑡𝑡−1 𝑦𝑦𝑡𝑡 𝐜𝐜1 𝐜𝐜𝑡𝑡 𝐜𝐜𝑡𝑡−1 … 𝐜𝐜𝑡𝑡 𝐬𝐬𝑡𝑡−1 𝑦𝑦𝑡𝑡−1 𝐪𝐪𝑡𝑡 𝛾𝛾𝐯𝐯1(𝐖𝐖𝐯𝐯1𝑔𝑔𝐪𝐪𝑡𝑡+ 𝐛𝐛𝐯𝐯1) 𝛾𝛾𝐯𝐯2(𝐖𝐖𝐯𝐯2𝑔𝑔(𝐪𝐪𝑡𝑡) + 𝐛𝐛𝐯𝐯2) 𝛾𝛾𝐯𝐯𝒞𝒞(𝐖𝐖𝐯𝐯𝒞𝒞𝑔𝑔(𝐪𝐪𝑡𝑡) +𝐛𝐛𝐯𝐯|𝒞𝒞|) … 𝑓𝑓𝑠𝑠(𝐪𝐪𝑡𝑡) local coordinate coding Figure 2: Framework of NMT with B-RefNet. The output fs(qt) of RefNet can be regarded as an approximation of current target word embedding e(yt). learns this mapping with a supervised regression function derived from LCC. It should be noted that the corpus from which the reference vectors (cG t or fs(qt)) are learned can be any monolingual or bilingual data, and the translations generated are relatively effected by the quality of the corpus. In this work, we constrain it as the training corpus for convenience and a fair comparison with the related work. 3.2 Monolingual Referent Network In this section, we seek to improve NMT by rephrasing the source sentence. Instead of storing all source contexts, we regenerate the source contexts from a learned manifold with a combination of a fixed number of anchor points. Formally, given any source sequence x with length m in the training samples, let h = (h1, ..., hm) denotes the hidden states generated by the NMT encoder. We firstly obtain the representation of the source sentence hM via a mean-pooling operation. According to the definition of LCC, it can be assumed that hM ≈γ(hM) where γ(hM) is the local coordinate coding of hM, computed as: γ(hM) = |C| X j=1 γj(hM)vj. (10) Here, vj is the j-th anchor point. The coefficient γj(hM) is used to measure the weight of anchor point vj corresponding to γ(hM). In conventional manifold learning methods, γj(hM) is generally computed with distance measure. And to achieve localization, the coefficients corresponding to anchor points out of the neighbors of hM are set to zero. However, it is hard to train in deep neural network using stochastic gradient methods. Inspired by the attention mechanism (Bahdanau et al., 2015), we propose to employ an attention layer to obtain the weights: γj(hM) = exp(s(hM, vj)) P|C| j=1 exp(s(hM, vj)) , (11) where s(·) is a score function. Here, we propose a tri-nonlinear score function which has been proven especially effective in the experiments: s(hM, vj) = v⊤ s tanh(Wsvj + UshM + Vs(vj ◦hM)), (12) where Ws, Us, Vs and vs are trainable parameters. ◦is the element-wise multiplication, and dimension of any anchor point should be the same to hM. To find the optimal anchor point, localization measure (Yu et al., 2009) is employed as the optimization object: min γ,C lα ∥hM −γ(hM)∥+ lβ |C| X j=1 |γj(hM)| ∥vj −γ(hM)∥2 . 
(13) Since any source sentence presentation hM can be represented by the linear combination of the anchors, the trained anchor points can be regarded as the bases of the latent space of all source annotations, containing the global contextual information. Therefore, during translation decoding of NMT, we can drop the coefficient γ and rephrase the source sentence only with the anchor points. 3006 Specifically, we apply an attention mechanism between current local contextual information and each anchor point vj to get the global context as: cG t = |C| X j=1 αG tjvj, (14) where αG tj is the attention score between current local contexts and the global context, computed as: αG tj = softmax(v⊤ α tanh(Wαst−1 +Uαct + Vαvj)). (15) Once the global context cG t is obtained, we feed it to decoder states: st = fd(e(yt−1), st−1, ct, cG t ), (16) where ct encodes the local contextual information and cG t contains the global monolingual information from all source sentences in the training corpus. When the model has trouble to translate some words or sentence fragments, it can refer to cG t to gain the richer source contextual information. 3.3 Bilingual Reference Network The bilingual model is proposed to improve NMT by providing a golden translations according to rest samples in the training corpora. To be specific, once source context ct and target context st−1 are obtained, we hope to provide a referable prediction e( ˜yt) of the current target word embedding e(yt) according to other sentence pairs in the training data for the decoder. The functionality of the NMT decoder during translation (Eq.2 and Eq.3) is totally a function that maps the source context ct, target context st−1 and last target word yt−1 to current target yt. NMT takes it as a classification problem, using tanh or other gated RNN unit to implement this function. In this work, we propose a much stronger model in information expression, that regrades the problem as regression: qt = [e(yt−1)⊤, s⊤ t−1, c⊤ t ]⊤, (17) e(yt) ≈fs(qt) = W(qt)g(qt) + b(qt), (18) where g is a transformation function that transforms qt to a anchor-size vector, W and b are the weight matrix and bias vector of the regression function. The weight and bias are allowed to vary according to the input qt, which makes the function capable of mapping each qt to the corresponding e(yt) precisely. However, it is impossible to store the weight and bias for every qt computed within the training data. Therefore, we approximate the weight and bias function in Eq.18 using local coordinate coding as: fs(qt) = |C| X j=1 γj(qt) Wvjg(qt) + bvj  , (19) where vj ∈C is an anchor point, Wvj and bvj are trainable parameters corresponding to vj, and γj(qt) is the weight function, computed as: γj(qt) = exp(s(qt, vj)) P|C| j=1 exp(s(qt, vj)) . (20) Similar to M-RefNet, the score is computed by the tri-nonlinear function as: s(qt, vj) = v⊤ b tanh(Wbvj+Ubqt+Vb(vj◦qt)). (21) Here, fs(qt) can be regarded as an approximation of e(yt) based on all the sentence pairs in the training data. Therefore, we feed the function value to the decoder state to guide sentence generation: st = fd(e(yt−1), st−1, ct, fs(qt)). (22) The optimal weight matrices and anchor points are obtained by minimizing the hinge loss for each sentence pair (x, y) as: LM = |y| X t=1 ∥e(yt) −fs(qt)∥2+λM |C| X j=1 ∥W(vj)∥2 . (23) 3.4 Training and Inference Stage-wise training strategies have been proven to be efficient when system is relative complicated by plenty of recent work (Maruf and Haffari, 2018; Tu et al., 2018). 
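To make the added parameters of the second stage concrete, the following PyTorch-style sketch summarises the B-RefNet regression of Eq. (19)-(21); the module layout, the tensor shapes, and the use of g(q_t) inside the score function (the paper writes q_t directly, leaving the dimension bookkeeping implicit) are assumptions of this sketch rather than the released implementation.

import torch
import torch.nn as nn

class BRefNetSketch(nn.Module):
    # Approximates e(y_t) from q_t = [e(y_{t-1}); s_{t-1}; c_t] with |C| anchors.
    def __init__(self, q_dim, anchor_dim, emb_dim, num_anchors):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_anchors, anchor_dim))
        self.g = nn.Linear(q_dim, anchor_dim)          # transformation g(q_t)
        # Per-anchor regression weights W_v and biases b_v (Eq. 19).
        self.W = nn.Parameter(torch.randn(num_anchors, emb_dim, anchor_dim))
        self.b = nn.Parameter(torch.zeros(num_anchors, emb_dim))
        # Parameters of the tri-nonlinear score (Eq. 21).
        self.Wb = nn.Linear(anchor_dim, anchor_dim, bias=False)
        self.Ub = nn.Linear(anchor_dim, anchor_dim, bias=False)
        self.Vb = nn.Linear(anchor_dim, anchor_dim, bias=False)
        self.vb = nn.Parameter(torch.randn(anchor_dim))

    def forward(self, q):                              # q: (batch, q_dim)
        gq = self.g(q)                                 # (batch, anchor_dim)
        v = self.anchors                               # (|C|, anchor_dim)
        # Tri-nonlinear score s(., v_j), Eq. (21), computed on g(q_t) so that
        # the element-wise product is dimensionally consistent.
        score = torch.tanh(self.Wb(v).unsqueeze(0)
                           + self.Ub(gq).unsqueeze(1)
                           + self.Vb(v.unsqueeze(0) * gq.unsqueeze(1))) @ self.vb
        gamma = torch.softmax(score, dim=-1)           # (batch, |C|), Eq. (20)
        # Eq. (19): weighted sum of per-anchor affine maps of g(q_t).
        per_anchor = torch.einsum('kda,ba->bkd', self.W, gq) + self.b.unsqueeze(0)
        return (gamma.unsqueeze(-1) * per_anchor).sum(dim=1)   # (batch, emb_dim)

In this sketch each anchor carries its own affine map, so the number of added parameters grows linearly with the anchor count, which is why a small |C| keeps the model lightweight.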
In this work, we first pre-train a standard NMT on a set of training examples {[xn, yn]}N n=1 as initialization for training the added parameters in our proposed models. Let θ = {θE, θD} denote the parameters of the standard NMT, where θE and θD are parameters of the standard encoder and decoder (including attention model) respectively. For M-RefNet, the stage following NMT training is to obtain the weight vectors γ and anchor points C related to all training 3007 System MT05 MT06 MT08 Avg Dl4mt 32.88 32.30 25.97 30.38 NMT 35.76 34.82 27.86 32.81 CS-NMT 36.63 36.41 29.47 34.17 LS-NMT 36.46 36.99 29.73 34.39 CC-NMT 36.65 37.08 29.71 34.48 DC-NMT 36.82 36.73 29.83 34.46 This work M-RefNet 37.31 37.72 30.41 35.15 B-RefNet 37.71 37.99 30.80 35.50 Table 2: BLEU scores of different models on Zh-En. sentence representations hM by minimizing localization measure (Eq.13). Then we fix the trained anchor points and encoder, and only fine-tune the decoder θD and the added parameters θM related to the monolingual reference network (Eq.15 and Eq.16): max θD,θM N X n=1 [log P(yn|xn; θ, θM, γ)] . (24) To train B-RefNet efficiently, we fix the trained parameters of the standard NMT and only update the added parameters θB including all weight matrices and biases related to local coordinate coding (Eq.19 and Eq.21). The training object is: max θB N X n=1 [log P(yn|xn; θ, θB) −λLM] , (25) where λ is a hyper-parameter that balances the preference between likelihood and hinge loss. During inference, all parameters related to LCC are fixed. Therefore, the work can be regarded as a static approach, compared with the conventional document-level NMT. That means, the final translation is only effected by the reference corpus but not by the sentences surrounding the current one to translate. Naturally, there leaves a question that how it influences the quality of translations when various reference corpus is chosen. We leave it in future work and only use the training corpus in this paper. 4 Experiments We evaluate the reference network models on two translation tasks, NIST Chinese-English translation (Zh-En) and WMT English-German translation (En-De). 4.1 Settings Datasets For Zh-En, we choose 1.25M sentence pairs from LDC dataset1 with 34.5 English words and 27.9M Chinese words. NIST MT02 is chosen as the development set, and NIST MT05/06/08 as test sets. Sentences with more than 50 words are filtered and vocabulary size is limited as 30k. We use case-insensitive BLEU score to evaluate Zh-En translation performance. For En-De, the training set is from (Luong et al., 2015) which contains 4.5M bilingual pairs with 116M English words and 100M German words. BPE (Sennrich et al., 2016) is employed to split the sentence pairs into subwords and we limit the vocabulary as 40k sub-words units. Newstest2012/2013 are chosen for developing and Newsetest2014 for test. casesensitive BLEU2 is employed as the evaluation metric. Models We evaluate our RefNet with different structures on Zh-En and En-De. For Zh-En we choose the typical attention-based recurrent NMT model (Bahdanau et al., 2015) as initialization, which consists of a bi-directional RNN-based encoder and a one layer RNN decoder. The dimensions of embedding and hidden state are 620 and 1000 respectively. For En-De, deep linear associative unit model (DeepLAU) (Wang et al., 2017b) is chosen as the base model. Both the encoder and decoder consist of 4-layer LAUs. All embedding and hidden states are 512-dimensional vectors. 
Moreover, we use layer normalization (Ba et al., 2016) on all layers. For both architectures, the number of anchor points is 100 for M-RefNet and 30 for B-RefNet. The anchor dimension of B-RefNet is set to 100. The hyper-parameter λ in Eq.25 is set to 1. The norm of gradient is clipped to be within [−1, 1] and dropout is applied to embedding and output layer with rate 0.2 and 0.3 respectively. When generating translations, we utilize beam search with beam size 10 on Zh-En and 8 on En-De. 4.2 Results on Chinese-English Translation The standard attention-based NMT model is chosen as the baseline and initialization of our models. Moreover, we also list the results of the open1The corpus contains LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06 2https://github.com/moses-smt/mosesdecoder /blob/master/scripts/generic/multi-bleu.perl 3008 source Dl4mt and re-implementations of the following related work for comparison: • Cross-sentence context-aware NMT (CSNMT) (Wang et al., 2017a): A cross-sentence NMT model that incorporates the historical representation of three previous sentences into decoder. • LC-NMT (Jean et al., 2017): A NMT model that concurrently encodes the previous and current source sentences as context, added to decoder states. • NMT augmented with a continuous cache (CC-NMT) (Tu et al., 2018): A NMT model armed with a cache3 which stores the recent translation history. • Document Context NMT with Memory Networks (DC-NMT) (Maruf and Haffari, 2018): A document-level NMT model that stores all source and target sentence representations of a document to guide translation generating4. All the re-implemented systems share the same settings with ours for fair comparisons. 4.2.1 Main Results Results on Zh-En are shown in Table 2. The baseline NMT significantly outperforms the opensource Dl4mt by 2.43 BLEU points, indicating the baseline is strong. Our proposed M-RefNet and B-RefNet improve the baseline NMT by 2.34 and 2.69 BLEU respectively and up to 2.90 and 3.17 BLEU on NIST MT06, which confirms the effectiveness of our proposed reference networks. Overall, B-RefNet achieves the best performance over all test sets Compared with the related work which incorporate document-level information NMT, our proposed models still have a significant advantage. Compared to the best performance achieved by the related work (CC-NMT), M-RefNet and BRefNet outperform it over all test sets and gain improvements of 0.77 BLEU and 1.02 BLEU in average. The possible reason is that all the related work only leverage a small range of the documentlevel information, limited by model complexity 3Cache size is set to 25. 4LDC training corpora contains nature boundaries. However document range is not clear for NIST test data. We use clustering and regard each class as a document. Dimension of document context is set to 1024. # System #Para Speed Train Test 0 NMT 71.1M 3590.4 114.21 1 CS-NMT 95.7M 747.5 97.10 2 LC-NMT 96.8M 1983.5 70.11 3 CC-NMT 75.1M 2844.7 113.09 4 DC-NMT 86.2M 2093.6 54.07 5 M-RefNet 77.7M 2563.98 113.26 6 B-RefNet 85.1M 2191.4 104.07 Table 3: Statistics of parameters, training speed (sentences/minute) and testing speed (words/second). 
10 20 30 40 50 60 70 80 [0,10) [10,20) [20,30) [30,40) [40,50) >50 Translation Length Length of Source Sentence NMT M-RefNet B-RefNet Reference 10 15 20 25 30 35 40 [0,10) [10,20) [20,30) [30,40) [40,50) >50 BLEU Length of Source Sentence NMT M-RefNet B-RefNet (a) Translation quality (b) Averaged length Figure 3: Translation quality and averaged length of the translations as source sentences become longer. and time consuming. In contrast, our models are capable to express all information with more abstract representations. According to the results, though the information is deeply compressed in our models, it is still effective. 4.2.2 Analysis Parameters and Speed The number of parameters and speed of each model are listed in Table 3. It can be seen that M-RefNet only introduces 6.6M additional parameters while B-RefNet introduces relative larger number of parameters (14M). Considering training process, both M-RefNet and BRefNet are quite efficient and the training speeds are little slower than CC-NMT, for the added amount of parameters is quite small compared to the baseline NMT and related systems. In terms of decoding, both proposed models do not slow down the translation speed obviously and M-RefNet achieves the fastest speed over all systems except the baseline NMT. The reason is that our models do not incorporate additional previous sentences or interact with extra memory as the relevant document-level systems. Furthermore, though the training speed and number of parameters of B-RefNet and DC-NMT are similar, BRefNet gains a twice faster translation speed, because that DC-NMT needs a two-pass translation 3009 Source agenting zantong yige zhongguo de lichang . Reference argentina supports the ” one china ” policy. NMT argentina agrees with china ’ s stand on the one china . M-RefNet the argentine government supports the one china position . B-RefNet argentina supports the one china policy . Source yindu bianfang minbing 2 yue 17 ri , jiaqiang le dui niboer bianjie de xunluo . Reference on february 17 , the indian border security force stepped up patrols along the border with nepal . NMT on 17 february , indian border defense militia [ UNK UNK UNK UNK UNK ] . M-RefNet the indian border defense militia , on 17 february , strengthened the patrol of nepal ’ s border . B-RefNet the indian border defense militia has stepped up patrols on the nepalese border on 17 february . Table 4: Comparison on translation examples. The translation errors are highlighted with italic and the correct ones are highlighted with bold type. process to fill the memory cells. Length Analysis We follow (Luong et al., 2015) to group sentences with similar lengths and compute the BLEU score of each group, as shown in Figure 3. The reason for the falling of BLEU in the last group (>50) is that sentences longer than 50 are removed during training. From this figure, we can see that our proposed models outperform the baseline NMT in all ranges of length. Moreover, translations generated by M-RefNet and B-RefNet have more similar lengths to the references compared with the baseline NMT. Case Study Table 4 shows the translation examples on Zh-En. In the first case, the Chinese word ”lichang” (standpoint, position, or policy) is incorrectly interpreted as ”stand on” by NMT. Both MRefNet and B-RefNet generate legible translations while translation from B-RefNet is more precise. This is because the word pair (”lichang”, ”policy”) appear somewhere in the training data and is leveraged by the systems according to the contexts. 
This phenomenon is similar in the second case. Translation given by NMT is not readable. In contrast, M-RefNet generates the core verb ”strengthened” and B-RefNet provides a more accurate collocation ”stepped up patrols”. 4.3 Results on English-German Translation On this task, DeepLAU (Wang et al., 2017b) is chosen as the baseline and also used as the pretrained model. We list the translation performance of our models and some existing NMT systems in Table 5. All the systems except for Robust NMT (Cheng et al., 2018) have a deep architecture with no less than 4 layers while Robust NMT introduces a additional discriminator for adversarial training. From the table, we can observe that our strong baseline DeepLAU is comparable to Google’s neural machine translation (GNMT) (Wu et al., 2016). M-RefNet outperforms the baseline by 1.29 BLEU points and B-RefNet achieves slightly better performance with a 1.79 BLEU improvement, which is consistent to the results on Zh-En. Compared with the SOTA deep NMT systems, both M-RefNet and B-RefNet outperform GNMT and even obtain comparable performance with ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017) which have much deeper architectures with relative much more parameters. Since the reference networks do not rely on the recurrent structure, one interesting future direction is to apply our methods to such complicated models to bring further improvements. 5 Related Work Document-level Neural Machine Translation There are few works that consider the documentlevel contextual information to improve typical NMT. Jean et al. (2017) propose to use a additional encoder to generate the latent representation of previous sentence as extra context for decoder and attention mechanism is also applied between the decoder state and previous context to get access to word-level information of the previous sentence. Contemporaneously, Wang et al. (2017a) extend NMT by adding two encoders to encode the previous sentences in word-level and 3010 System Architecture BLEU 0 GNMT 8-layer LSTM encoder and decoder 24.60 1 Robust NMT 2-layer GRU encoder and decoder + adversarial training 25.26 2 ConvS2S 15-layer CNN encoder and decoder 25.16 3 Transformer (big) 6-layer encoder and decoder + 16-head self-attention 28.40 This work 4 DeepLAU 24.37 5 M-RefNet 4-layer LAU encoder and decoder 25.66 6 B-RefNet 26.16 Table 5: Translation quality on En-De. sentence-level respectively. The last hidden state of encoders are considered as the summarization of a previous sentence and the group. Bawden et al. (2018) employ multiple encoder s to summarize the antecedent and propose to combine the contexts with a gated function. However, these incorporated extra encoders bring large amount of parameters and slow down the translation speed. Tu et al. (2018) propose to modify the NMT with light-weight key-value memory to store the translation history. However, due to the limitation of the memory size, the very short view on the previous (25 timesteps) is not sufficient to model the document-level contextual information. Additionally, Maruf and Haffari (2018) propose to capture the global source and target context of a entire document with memory network (Graves et al., 2014; Wang et al., 2016). Nevertheless, since the number of sentence pairs in a document could be enormous, storing all sentence with memory components could be very time and space consuming. More recently, Miculicich et al. (2018) and Zhang et al. 
(2018) propose to improve Transformer by encoding previous sentences with extra encoders. The reference book in this work can be regarded as a special kind of document context. However, there are two major differences between our approach and the above work. First, we encode the entire corpus into a handful of anchor points which is much more light-weight but concentrated to capture the global contextual information . Second, the global contexts in this work is static. That means, given a sentence to translate, the final translation result only depends on the reference corpus, but not the sentences surrounding the current one. Local Coding There are a number of works on manifold learning (Roweis and Saul, 2000; Van Gemert et al., 2008; Yu et al., 2009; Ladicky and Torr, 2011). The manifold learning methods approximate any point on the latent manifold with a linear combination of a set of localized anchor points relying on the assumption that high dimensional input usually lies on the lower dimensional manifold. Agustsson et al. (2017) utilize local coding into deep neural networks on age prediction from images and Cao et al. (2018) exploit LCC for GAN (Goodfellow et al., 2014) to capture the local information of data. All these works focus on application of Computer Vision while we apply LCC in a Nature Language Processing task. To our knowledge, this is the first attempt to incorporate local coding into NMT modeling. 6 Conclusion and Future Work In this work, we propose two models to improve the translation quality of NMT inspired by the common human behaviors, paraphrasing and consulting. The monolingual model simulates the paraphrasing process by utilizing the global source information while the bilingual model provides a referable target word based on other sentence pairs in the training corpus. We conduct experiments on Chinese-English and English-German tasks, and the experimental results manifest the effectiveness and efficiency of our methods. In the future, we would like to investigate the feasibility of our methods on non-recurrent NMT models such as Transformer (Vaswani et al., 2017). Moreover, we are also interested in incorporating discourse-level relations into our models. Acknowledgments We would like to thank the reviewers for their valuable comments and suggestions. 3011 References Eirikur Agustsson, Radu Timofte, and Luc Van Gool. 2017. Anchored regression networks applied to age estimation and super resolution. In Proceedings of ICCV. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of NAACL. Jiezhang Cao, Yong Guo, Qingyao Wu, Chunhua Shen, and Mingkui Tan. 2018. Adversarial learning with local coordinate coding. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756– 1766. Association for Computational Linguistics. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. 
Learning phrase representations using rnn encoder-decoder for statistical machine translation. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP. Lubor Ladicky and Philip Torr. 2011. Locally linear support vector machines. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 985–992. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of ACL. Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954. Sam T Roweis and Lawrence K Saul. 2000. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association of Computational Linguistics, 6:407– 420. Jan C Van Gemert, Jan-Mark Geusebroek, Cor J Veenman, and Arnold WM Smeulders. 2008. Kernel codebooks for scene categorization. In Proceedings of ECCV. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017a. Exploiting cross-sentence context for neural machine translation. In Proceedings of EMNLP. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Memory-enhanced decoder for neural machine translation. In Proceedings of EMNLP. Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017b. Deep neural machine translation with linear associative unit. In Proceedings of ACL. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. 3012 Kai Yu, Tong Zhang, and Yihong Gong. 2009. Nonlinear learning using local coordinate coding. In Proceedings of NIPS. 
Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013–3024 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3013 Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation Chenze Shao123, Yang Feng12⋆, Jinchao Zhang3, Fandong Meng3, Xilin Chen12 and Jie Zhou3 1 University of Chinese Academy of Sciences 2 Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS) 3 Pattern Recognition Center, WeChat AI, Tencent Inc, China {shaochenze18z, fengyang, xlchen}@ict.ac.cn {dayerzhang, fandongmeng, withtomzhou}@tencent.com Abstract Non-Autoregressive Transformer (NAT) aims to accelerate the Transformer model through discarding the autoregressive mechanism and generating target words independently, which fails to exploit the target sequential information. Over-translation and under-translation errors often occur for the above reason, especially in the long sentence translation scenario. In this paper, we propose two approaches to retrieve the target sequential information for NAT to enhance its translation ability while preserving the fast-decoding property. Firstly, we propose a sequence-level training method based on a novel reinforcement algorithm for NAT (Reinforce-NAT) to reduce the variance and stabilize the training procedure. Secondly, we propose an innovative Transformer decoder named FS-decoder to fuse the target sequential information into the top layer of the decoder. Experimental results on three translation tasks show that the Reinforce-NAT surpasses the baseline NAT system by a significant margin on BLEU without decelerating the decoding speed and the FS-decoder achieves comparable translation performance to the autoregressive Transformer with considerable speedup. 1 Introduction Neural machine translation (NMT) models (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014) solve the machine translation problem with the Encoder-Decoder framework and achieve impressive performance on translation quality. Recently, the Transformer model (Vaswani et al., 2017) further enhances the translation performance on multiple language pairs, while suffering from the slow decoding procedure, which reJoint work with Pattern Recognition Center, WeChat AI, Tencent Inc, China. ⋆Corresponding Author Src und noch tragischer ist , dass es Oxford war · · · Ref even more tragic is that it was Oxford · · · NAT and more more more more that it was Oxford · · · AR and , more tragic , Oxford was · · · Table 1: A fragment of a long sentence translation. AR stands for the translation of the autoregressive Transformer. The output of the NAT model contains repeated translations of word ‘more’ and misses the word ‘tragic’. stricts its application scenarios. The slow decoding problem of the Transformer model is caused by its autoregressive nature, which means that the target sentence is generated word by word according to the source sentence representations and the target translation history. Non-autoregressive Transformer model (Gu et al., 2017a) is proposed to accelerate the decoding process, which can simultaneously generate target words by discarding the autoregressive mechanism. 
Since the generation of target words is independent, NAT models utilize alternative information such as encoder inputs (Gu et al., 2017a), translation results from other systems (Lee et al., 2018; Guo et al., 2018) and latent variables (Kaiser et al., 2018) as decoder inputs. Without considering the target translation history, NAT models are weak to exploit the target words collocation knowledge and tend to generate repeated target words at adjacent time steps (Wang et al., 2019). Over-translation and undertranslation problems are aggravated and often occur due to the above reasons. Table 1 shows an inferior translation example generated by a NAT model. Compared to the autoregressive Transformer, NAT models achieve significant speedup while suffering from a large gap in translation quality due to the lack of target sequential information. 3014 In this paper, we present two approaches to retrieve the target sequential information for NAT models to enhance their translation ability and meanwhile preserve the fast-decoding property. Firstly, we propose a sequence-level training method based on a novel reinforcement algorithm for NAT (Reinforce-NAT) to reduce the variance and stabilize the training procedure. We leverage the sequence-level objectives (e.g., BLEU (Papineni et al., 2002), GLEU (Wu et al., 2017), TER (Snover et al., 2006)) instead of the cross-entropy objective to encourage NAT model to generate high quality sentences rather than the correct token for each position. Secondly, we propose an innovative Transformer decoder named FS-decoder to fuse the target sequential information into the top layer of the decoder. The bottom layers of the FS-decoder run in parallel to keep the decoding speed and the top layer of the FS-decoder can exploit target sequential information to guide the target words generation procedure. We conduct experiments on three machine translation tasks (IWSLT16 En→De, WMT14 En↔De, WMT16 En→Ro) to validate our proposed approaches. Experimental results show that the Reinforce-NAT surpasses the baseline NAT system by a significant margin on the translation quality without decelerating the decoding speed, and the FS-decoder achieves comparable translation capacity to the autoregressive Transformer with considerable speedup. 2 Background 2.1 Autoregressive Neural Machine Translation Given a source sentence X = {x1, ..., xn} and a target sentence Y = {y1, ..., yT }, autoregressive NMT models the translation probability from X to Y as: P(Y |X, θ) = T Y t=1 p(yt|y<t, X, θ), (1) where θ is a set of model parameters and y<t = {y1, · · · , yt−1} is the translation history. Given the training set D = {XM, YM} with M sentence pairs, the training objective is to maximize the loglikelihood of the training data as: θ = arg max θ {L(θ)} L(θ) = M X m=1 T X t=1 log(p(ym t |ym <t, Xm, θ)), (2) where the superscript m indicates the m-th sentence in the dataset. During training, golden target words are fed into the decoder as the translation history. During inference, the partial translation generated by decoding algorithms such as greedy search and beam search is fed into the decoder to guide the generation of the next word. The prominent feature of the autoregressive model is that it requires the target side historical information in the decoding procedure. Therefore target words are generated in the one-by-one style. Due to the autoregressive property, the decoding speed is limited, which restricts the application of the autoregressive model. 
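To make the source of this latency concrete, the following is a minimal sketch of greedy autoregressive decoding; the decoder interface (a callable returning per-position logits) is an assumption of the sketch rather than a specific toolkit API.

import torch

def greedy_decode(decoder, src_repr, bos_id, eos_id, max_len=100):
    # Autoregressive inference: each step conditions on the partial
    # translation y_<t, so the decoder calls cannot be parallelised
    # over time steps.
    ys = [bos_id]
    for _ in range(max_len):
        history = torch.tensor(ys).unsqueeze(0)      # (1, t)
        logits = decoder(history, src_repr)          # (1, t, |V_t|), assumed interface
        next_id = logits[0, -1].argmax().item()      # argmax of p(y_t | y_<t, X)
        ys.append(next_id)
        if next_id == eos_id:
            break
    return ys[1:]

During training, by contrast, teacher forcing feeds the gold prefix y<t, so the per-position cross-entropy terms of Eq. (2) can be computed in a single parallel pass; it is only inference that is inherently sequential.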
2.2 Sequence-Level Training for Autoregressive NMT Reinforcement learning techniques (Sutton et al., 2000; Ng et al., 1999; Sutton, 1984) have been widely applied to improve the performance of the autoregressive NMT with sequence-level objectives (Shen et al., 2016; Ranzato et al., 2015; Bahdanau et al., 2016). As sequence-level objectives are usually non-differentiable, the loss function is defined as the negative expected reward: Lθ = − X Y=y1:T p(Y|X, θ) · r(Y), (3) where Y = y1:T denotes possible sequences generated by the model, and r(Y) is the corresponding reward such as BLEU, GLEU and TER for generating sequence Y. Enumerating all the possible target sequences is impossible due to the exponential search space, and REINFORCE (Williams, 1992) gives an elegant way to estimate the gradient for Eq.(3) via sampling a sequence Y from the probability distribution and estimate the gradient with the gradient of log-probability weighted by the reward r(Y): ∇θLθ = −E Y[ T X t=1 ∇θ log(p(yt|y<t, X, θ)) · r(Y)]. (4) Current reinforcement learning (RL) methods are designed for autoregressive models. Moreover, previous investigations (Wu et al., 2018; Weaver and Tao, 2013) show that the RL-based training procedure is unstable due to its high variance of gradient estimation. 3015 2.3 Non-Autoregressive Neural Machine Translation Non-autoregressive neural machine translation (Gu et al., 2017a) is proposed to accelerate the decoding process, which can simultaneously generate target words by discarding the autoregressive mechanism. The translation probability from X to Y is modeled as follows: P(Y |X, θ) = T Y t=1 p(yt|X, θ). (5) Given the training set D = {XM, YM} with M sentence pairs, the training objective is to maximize the log-likelihood of the training data as: θ = arg max θ {L(θ)} L(θ) = M X m=1 T X t=1 log(p(ym t |Xm, θ)). (6) During decoding, the translation with maximum likelihood can be easily obtained by taking the word with the maximum likelihood in every time step: ˆyt = arg max yt p(yt|X, θ) (7) NAT models do not utilize the target translation history, which results in its weakness in exploiting the target words collocation knowledge for generating correct target word sequence under the crossentropy objective function. Compared to autoregressive models, NAT models achieve significant speedup while suffering from a large gap in the translation quality due to the lack of target sequential information. 3 Approaches To retrieve the sequential information for NAT models for enhancing their translation ability and meanwhile preserving the fast-decoding property, we present two approaches: sequence-level training with a reinforcement algorithm for NAT models (Reinforce-NAT) to exploit the sequential information, and a novel Transformer decoder named FS-decoder to fuse sequential information into the top layer. 3.1 Sequence-Level Training for NAT Models Word-level objective functions, such as the crossentropy loss, focus on generating the correct token in each position, which will be inferior for NATs without the target sequential information. We propose to encourage NAT models to generate highquality sentences rather that correct words with the sequence-level training algorithm (ReinforceNAT). Algorithm Derivation In this section, we present the derivation of Reinforce-NAT and show its low variance and efficiency. We first introduce the REINFORCE algorithm (Williams, 1992) for NAT models. 
In NAT models, with the non-autoregressive translation probability defined in Eq.(5), the gradient of the expected loss is: ∇θLθ = − X Y ∇θ T Y t=1 p(yt|X, θ) · r(Y). (8) Directly applying the REINFORCE algorithm to Eq.(8) will make the gradient update in every postion guided by the same sentence reward r(Y), which is similar to the method for autoregressive models and is unstable during training. Instead, for NAT models, Eq.(8) can be further reduced to the following form, which is the gradient of target words probability weighted by their corresponding expected rewards1: ∇θLθ = − T X t=1 X yt ∇θp(yt|X, θ) · r(yt), (9) where r(yt) is the expected reward when yt is fixed: r(yt) = E y1:t−1 E yt+1:T r(Y). (10) In Eq.(9), the predicted word yt in position t is evluated by its corresponding expected reward r(yt), which is more accurate than the sentence reward r(Y). The r(yt) can be estimated by Monte Carlo sampling, as illustrated in algorithm 1. Specifically, we fix yt in position t and sample other words from the probability distribution p(·|X, θ)) for n times. The estimated value of r(yt) is the average reward of the n sampled sentences. Notice that the expected reward r(yt) can be estimated without running the decoder for multiple times, which is a major advantage of NAT models in sequence-level training. 1The proof is provided in the appendix 3016 Algorithm 1 Estimation of r(yt) Input: the output probability distribution p(·|X, θ)), t, yt, T, sampling times n Output: estimate of r(yt) 1: r = 0, i = 0 2: for i < n do 3: sample ˜y1:t−1, ˜yt+1:T from p(·|X, θ)) 4: ˜Y = {˜y1:t−1, yt, ˜yt+1:T } 5: r += r( ˜Y) 6: i += 1 7: r = r/n 8: return r The gradient in Eq.(9) can be estimated with REINFORCE (Williams, 1992): ∇θLθ = − T X t=1 E yt[∇θ log(p(yt|X, θ)) · r(yt)]. (11) Eq.(11) corresponds to a gradient estimation method through sampling a target word yt and the gradient of the log-probability of yt weighted by reward r(yt) is utilized to estimate the expected gradient over the vocabulary. Though the estimation is unbiased, the gradient estimator still suffers from high variance. The variance can be eliminated by traversing the whole vocabulary, but it is unaffordable due to the huge vocabulary size. The probability distribution over the target vocabulary is usually a centered distribution where the top-ranking words occupy the central part of the distribution, and the softmax layer ensures that other words with small probabilities have small gradients2. Hence the variance will be effectively reduced if we can eliminate the variance from topranking words. This motivates us to compute gradients of the top-ranking words accurately and estimate the rest via the REINFORCE algorithm. We can build an unbiased estimation of Eq.(9) by traversing top-k words and estimating the rest via one sampling: ∇θLθ = − T X t=1 ( X yt∈TK ∇θp(yt|X, θ) · r(yt) + (1 −Pk) · E yt∼˜p[∇θ log(p(yt|X, θ)) · r(yt)]). (12) Algorithm 2 illustrates the proposed method. 
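To make the reward estimates used in both the traversing and the sampling steps concrete, the following is a minimal sketch of the Monte Carlo estimate of r(y_t) in Eq. (10) (Algorithm 1); the reward function (e.g., sentence-level GLEU against the reference) and the tensor layout are assumptions of the sketch.

import torch

def estimate_reward(probs, t, y_t, reward_fn, n=20):
    # probs: (T, |V|) per-position distributions p(.|X, theta) produced by
    # one pass of the NAT decoder.
    # Estimates r(y_t) = E_{y_1:t-1} E_{y_t+1:T} r(Y) with y_t held fixed.
    total = 0.0
    for _ in range(n):
        # Sampling a full sentence only requires drawing one word per
        # position from the cached distributions.
        sample = torch.multinomial(probs, num_samples=1).squeeze(1)   # (T,)
        sample[t] = y_t                                               # fix position t
        total += reward_fn(sample.tolist())
    return total / n

Because all per-position distributions come from a single non-autoregressive decoder pass, the n samples are drawn from cached distributions rather than obtained by re-running the decoder, which underlies the efficiency argument above.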
Although this algorithm will lead to multiple es2In the softmax layer, the gradient is proportional to the output probability Algorithm 2 Reinforce-NAT Input: the output probability distribution p(·|X, θ)), traversing count k, sample times n Output: estimate of ∇θLθ in position t according to Eq.(12) 1: TK = {words ranking top-k in p(·|X, θ))} 2: ∇θLθ = 0, ˜p = p, Pk = 0 3: for yt in TK do 4: estimate r(yt) by algorithm 1 with sample times n 5: ∇θLθ -= ∇θp(yt|X, θ) · r(yt) 6: ˜p(yt|X, θ) = 0 7: Pk += p(yt|X, θ) 8: normalize ˜p(·|X, θ) 9: sample yt from ˜p(·|X, θ) 10: estimate r(yt) by algorithm 1 with sample times n 11: ∇θLθ -= (1 −Pk) · ∇θ log(p(yt|X, θ)) · r(yt) 12: return ∇θLθ timations of the expected reward r(yt), the training cost is relatively low for the reason that the independent generation of target words makes NAT models efficient in estimating the expected reward, which will be either very expensive (Yu et al., 2017) or biased (Bahdanau et al., 2016) for autoregressive models. Reinforce-NAT To give the clear description, we firstly define symbols in Algorithm 2: 1) p(·|X, θ)) is the output probability distribution generated by the decoder on the target vocabulary at time t. 2) TK is the set of target words with top-k probabilities. 3) Pk is the sum of probabilities in TK, 4) ˜p is the normalized probability distribution after removing probabilities of words in TK. The algorithm takes the output probability distribution p, the traversing count k and the sampling times n as input and output the gradient estimation at step t. We divide the gradient estimation procedure at step t into two parts: traversing and sampling. The algorithm firstly builds the set TK with words ranking top-k in probability (line 1), then estimates expected rewards for words in TK by algorithm 1 (line 3, line 4). The accumulated gradient in TK are obtained by traversing the words in TK and accumulating gradients of their probability functions, which are weighted by correspond3017 ing rewards (line 5). After the traversing procedure for accumulating gradients for words in TK, the algorithm estimates the expected gradient for words that are not in TK in the sampling procedure. The algorithm obtains the probability distribution ˜p over the rest of words through masking probabilities of words in the Tk (line 6, line8). A word yt from the distribution ˜p (line 9) is sampled to compute the gradient of the log-probability of yt and then estimate the reward of r(yt). The weight for this estimation is 1−Pk, where Pk is the sum of probabilities in TK. Finally, the estimated gradient is the sum of gradients from Top-k words and the sampled word. (line 11). In a word, the algorithm aims to traverse gradients of important words since they can dominate the gradient estimation, and estimate the gradient of less important words via one sampling. 3.2 Fuse Sequential Information We propose an innovative Transformer decoder named FS-decoder to fuse the target sequential information into the top layer of the decoder. The FS-decoder consists of four parts: bottom layers, the fusion layer, the top layer and the softmax layer. In the decoder, we parallelize bottom layers in an non-autoregressive way to accelerate the model but serialize the top layer in an autoregressive way to enhance the translation quality. The teacher forcing algorithm (Williams and Zipser, 1989) is applied in the training where target embeddings are directly fed to the fusion layer. 
During decoding, FS-decoder only needs to run the top layer autoregressively. We illustrate the model in figure 1 and describe the detailed architecture of the FS-decoder in the following. Assume that the original Transformer has n decoder layers, the source sentence has length Ts, the target sentence has length T, and the predicted target length is T ′. Here we directly look up the source-target length dictionary to predict the target length. Bottom Layers. The decoder of FS-decoder contains n-1 bottom layers, which are identical to the decoder layers of NAT models (Gu et al., 2017a). Each layer consists of four sub-layers: the self-attention layer, the positional attention layer, the source side attention layer and the positionwise feed-forward layer. The inputs for bottom decoders X ′ are uniformly copied (Gu et al., 2017a) Figure 1: The architecture of FS-decoder. The decoder consists of n−1 bottom layers, the fusion layer, the top layer and the softmax layer. from the source input X where each decoder input in position t is a copy of the source input in position Round(T ′t/Ts): X ′ = Uniform(X). (13) The bottom layers take the inputs X ′ and output the hidden states H ′ with the same length T ′. Fusion Layer. The fusion layer is a linear transformation layer with a ReLU activation, which fuses the outputs from bottom layers H ′ and target embeddings Y in each position t as: Ht = ReLu(WH ′ t + UYt), (14) where W and U are weight matrices, t = 1, 2, · · · , T. H ′ will be padded to length T when T ′ is smaller than T. Outputs of the fusion layer are then fed to the top layer. Top Layer. The top layer of the decoder is identical to the original Transformer decoder layer, which does not contain the positional attention layer compared to bottom layers. The outputs are fed to the softmax layer. 3018 Like other autoregressive models, FS-decoder has to generate translations through decoding algorithms such as greedy search and beam search. During decoding, bottom layers run in advance to prepare the inputs for the fusion layer, and then the fusion layer and top layer run autoregressively with the embedding of predicted token fed to the fusion layer. 4 Related Work Gu et al. (2017a) introduced the nonautoregressive Transformer model to accelerate the translation. Lee et al. (2018) proposed a nonautoregressive sequence model based on iterative refinement, where the outputs of the decoder are fed back as inputs in the next iteration. Guo et al. (2018) proposed to enhance the decoder inputs with phrase-table lookup and embedding mapping. Kaiser et al. (2018) used a sequence of autoregressively generated discrete latent variables as inputs of the decoder. Knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016) is a method for training a smaller and faster student network to perform better by learning from a teacher network, which is crucial in NAT models. Gu et al. (2017a) applied Sequence-level knowledge distillation to eliminate the multimodality in the training corpus. Li et al. (2018) further proposed to improve non-autoregressive models through distilling knowledge from intermediary hidden states and attention weights of autoregressive models. Apart from non-autoregressive translation, there are works toward speeding up the translation from other perspectives. Wang et al. (2018) proposed the semi-autoregressive Transformer that generates a group of words in parallel at each time step. 
Press and Smith (2018) proposed the eager translation model that does not use the attention mechanism and has low latency. Zhang et al. (2018a) proposed the average attention network to accelerate decoding, which achieves significant speedup over the uncached Transformer. Zhang et al. (2018b) proposed cube pruning to speedup the beam search for neural machine translation without damaging the translation quality. Sequence-level training techniques have been widely explored in autoregressive neural machine translation, where most works (Ranzato et al., 2015; Shen et al., 2016; Wu et al., 2016; He et al., 2016; Wu et al., 2017; Yang et al., 2017) relied on reinforcement learning (Williams, 1992; Sutton et al., 2000) to build the gradient estimator. Recently, techniques for sequence-level training with continuous objectives have been explored, including deterministic policy gradient algorithms (Gu et al., 2017b), bag-of-words objective (Ma et al., 2018) and probabilistic n-gram matching (Shao et al., 2018). However, to the best of our knowledge, sequence-level training has not been applied to non-autoregressive models yet. The methods of variance reduction through focusing on the important parts of the distribution include importance sampling (Bengio et al., 2003; Glynn and Iglehart, 1989) and complementary sum sampling (Botev et al., 2017). Importance sampling estimates the properties of a particular distribution through sampling on a different proposal distribution. Complementary sum sampling reducdes the variance through suming over the important subset and estimating the rest via sampling. 5 Experiments 5.1 Settings Dataset. We conduct experiments on three translation tasks3: IWSLT16 En→De (196k pairs), WMT14 En↔De (4.5M pairs) and WMT16 En↔Ro (610k pairs). We use the preprocessed datasets released by Lee et al. (2018), where all sentences are tokenized and segmented into subword units using the BPE algorithm (Sennrich et al., 2016). For all tasks, source and target languages share the vocabulary with size 40k. For WMT14 En-De, we employ newstest-2013 and newstest-2014 as development and test sets. For WMT16 En-Ro, we take newsdev-2016 and newstest-2016 as development and test sets. For IWSLT16 En-De, we use the test2013 for validation. Baselines. We take the Transformer model (Vaswani et al., 2017) as the autoregressive baseline. The non-autoregressive model based on iterative refinement (Lee et al., 2018) is the nonautoregressive baseline, and we set the number of iterations to 2. Pre-train. 
To evaluate the sequence-level training methods, we pre-train the NAT baseline first and then fine-tune the baseline model with GLEU 3We release the source code in https://github.com/ictnlp/RSI-NAT 3019 IWSLT’16 En-De WMT’16 En-Ro WMT’14 En-De En→ toks/s speedup secs/b En→ Ro→ toks/s speedup En→ De→ toks/s speedup AR b=1 28.13 45.3 1.09× 0.20 31.53 31.35 45.6 1.23× 23.67 28.04 33.7 1.13× b=4 28.25 41.6 1.00× 0.20 31.85 31.60 37.1 1.00× 24.29 28.86 29.9 1.00× NAT FT 26.52 – 15.6 × – 27.29 29.06 – – 17.69 21.47 – – FT+NPD 28.16 – 2.36 × – 29.79 31.44 – – 19.17 23.20 – – IRNAT iter=2 24.82 423.8 6.64 × – 27.10 28.15 332.7 7.68 × 16.95 20.39 393.6 8.77 × adaptive 27.01 125.9 1.97 × – 29.66 30.30 118.3 2.73 × 21.54 25.43 107.2 2.39 × Our Models NAT-base 24.13 350.2 8.42× 0.62 25.96 26.49 349.0 9.41× 16.05 19.46 321.7 10.76× +REINFORCE 24.30 354.1 8.51× 2.51 26.49 27.20 346.7 9.35× 18.47 21.89 323.2 10.81× +Reinforce-NAT 25.18 350.6 8.43× 13.40 27.09 27.93 350.3 9.44× 19.15 22.52 320.9 10.73× FS-decoder(b=1) 27.58 168.7 4.06× 0.241 30.53 30.68 170.5 4.60× 21.53 27.20 143.3 4.79× FS-decoder(b=4) 27.78 140.8 3.38× 0.241 30.57 30.83 137.1 3.70× 22.27 27.25 112.2 3.75× Table 2: Generation quality (4-gram BLEU), decoding efficiency (tokens/sec), speedup and training speed (seconds/batch). Decoding efficiency is measured sentence-by-sentence from the En→direction. Speedup is calculated over the autoregressive Transformer with beam size 4. NAT: non-autoregressive transformer models (Gu et al., 2017a). IRNAT: iterative refinement for NAT (Lee et al., 2018). AR: the autoregressive Transformer model. b: beam size. FS-decoder: fuse the sequential information into the top layer. NAT-base: our non-autoregressive baseline. +REINFORCE: finetune the NAT-base with REINFORCE according to Eq.(11). +Reinforce-NAT: finetune the NAT-base with Reinforce-NAT according to Eq.(12). (Wu et al., 2016), which outperforms other metrics in our experiments. We stop the pre-train procedure, when training steps are more than 300k and no further improvements on the validation set are observed in last 100k steps. Hyperparameters. We closely follow the setting of Gu et al. (2017a) and Lee et al. (2018). In IWSLT16 En-De, we use the small model (dmodel=278, dhidden=507, nlayer=5, nhead=2, pdropout=0.1, twarmup=746). For experiments on WMT datasets, we use the base Transformer Vaswani et al. (2017) (dmodel=512, dhidden=512, nlayer=6, nhead=8, pdropout=0.1, twarmup=16000). The traversing count k and the sampling times n in algorithm 2 are respectively set to 5 and 20. We use Adam (Kingma and Ba, 2014) for the optimization. During decoding, we remove any token that is generated repeatly. The decoding speed is measured on a single Geforce GTX TITAN X. Knowledge Distillation. Knowledge distillation (Kim and Rush, 2016; Hinton et al., 2015) is proved to be crucial for successfully training NAT models (Gu et al., 2017a; Li et al., 2018). For all the translation tasks, we apply sequence-level knowledge distillation to construct the distillation corpus where the target side of the training corpus is replaced by the output of an autoregressive Transformer model. We use original corpora to train the autoregressive baseline and distillation corpora to train other models. 5.2 Main Results We compare our models with the NAT (Gu et al., 2017a) and the IRNAT (Lee et al., 2018). Table 2 shows the experiment results. 
We observe that models based on sequence-level training approaches, including REINFORCE and ReinforceNAT, significantly surpass the NAT baseline on BLEU without damaging the decoding speed. The Reinforce-NAT model outperforms the REINFORCE model in terms of BLEU points. On WMT14 En↔De, the Reinforce-NAT model achieves significant improvements by more than 3 BLEU points and outperforms NAT(FT) (Gu et al., 2017a) and IRNAT(iteration=2) (Lee et al., 2018). The above results demonstrate the effectiveness of sequence-level training and prove the strong ability of Reinforce-NAT. The experiment on the FS-decoder show that it brings huge BLEU improvements over the NAT baseline and even achieves comparable performance to the autoregressive Transformer with considerable speedup, which proves the capacity of the FS-decoder. 5.3 Training Speed Table 2 shows the training time per batch of our methods. Sequence-level training methods (i.e., REINFORCE and Reinforce-NAT) are slower than the word-level training. The bottleneck lies in the calculation of the reward (i.e., GLEU), which takes place in CPU and can be accelerated by multi-processing. Besides, these methods are only 3020 utilized to fine-tune the baseline model and take less than 10,000 batches to converge, which make the relatively low training speed affordable. 5.4 Effect of top-k size in Reinforce-NAT The Reinforce-NAT is proposed on the basis that the top-k words can occupy the central part of the probability distribution. However, it remains unknown which k is appropriate for us. A large k will slow down the training, and a small k will be not enough to dominate the probability distribution. We statistically and experimentally analyze the choice of k in Reinforce-NAT. We respectively set k to 1, 5 and 10 and record the topk probabilities in 10,000 target word predictions. Figure 2 and Table 3 illustrate the statistical properties of top-k probabilities. In figure 2, the x-axis divides the probability distribution into 5 intervals, and the y-axis indicates the number of times that the top-k probabilities are within this interval. In Table 3, we estimate the expection of top-k probabilities for different k. We find that k = 5 is a desirable choice that can cover a large portion of the probability distribution, and the marginal utility for a larger k is limitted. Figure 2: top-k probability distributions for k=1, 5 and 10 k 1 5 10 100 1000 E[Pk] 0.818 0.916 0.929 0.948 0.968 Table 3: top-k probability expection for k=1, 5, 10, 100, 1000 We further conduct experiments on IWSLT16 En→De to confirm the conclusion. We respectively set k to 0, 1, 5 and 10 in Reinforce-NAT and draw training curves. Figure 3 shows that REINFORCE(k = 0) is very unstable in the training, and greater k in Reinforce-NAT generally leads to better performance. In line with our previous conclusion, k = 5 is an ideal choice since it does not have a large performance gap between larger k. Figure 3: training curves for k = 0, 1, 5 and 10. 5.5 Performance over Different Lengths Table 2 shows that the performance of ReinforceNAT varies with datasets. Though IWSLT16 En→De and WMT14 En→De have the same language pair, Reinforce-NAT achieves an improvement of more than 3 BLEU points on WMT14 but only have about 1.0 BLEU points improvement on IWSLT16. We attribute this phenomenon to the length difference between two datasets. The WMT14 En→De dataset is in the news-domain, whose sentences are statistically longer than the spoken-domain IWSLT16 En→De dataset. 
Figure 4 shows BLEU scores over sentences in different length buckets. The BLEU scores of NAT-Base have a distinct decrease when the sentence length is over 40, while other models perform well on long sentences. It confirms that NAT models are weak in translating long sentences and our solutions can effectively improve the performance of NAT models on long sentences through leveraging sequential information. 5.6 Case Study In Table 4, we present a translation case from the validation set of WMT14 De→En. The case shows that the translation quality rise in the order of NAT-Base, +Reinforce-NAT, FS-decoder to AR-Base and the performance gap is large between NAT-Base and other models. Particularly, NAT models suffer from over-translation and 3021 Source und noch tragischer ist , dass es Oxford war - eine Universitt , die nicht nur 14 Tory-Premierminister hervorbrachte , sondern sich bis heute hinter einem unverdienten Ruf von Gleichberechtigung und Gedankenfreiheit versteckt . Target even more tragic is that it was Oxford , which not only produced 14 Tory prime ministers , but , to this day , hides behind an ill-deserved reputation for equality and freedom of thought . NAT-Base and more more more more that it was Oxford - a university that not not only only TTory Prime Minister , but has has to hidden hidden behind an unfounded reputation of equality and freedom of thought . Reinforce-NAT and more more tragic is that it was Oxford - a university that did not only produce 14 Tory Prime Minister , but has still to be hidden behind an unfied reputation of equality and freedom of thought . FS-decoder and even more tragic , it was Oxford - a university that produced not only 14 Tory Prime Minister , but still hidden behind an unbridled reputation of equality and freedom of thought . AR-Base and , more tragic , Oxford was - a university that not only produced 14 Tory Prime Minister , but still hidden behind an unprecedented reputation for equality and freedom of thought . Table 4: A translation case on WMT14 De→En task. Over-translation and under-translation errors occur in the translation of NAT-Base. Figure 4: The BLEU scores on the validation set of WMT14 En→De over sentences in different length buckets. The beam size of FS-decoder and AR-Base is 1. under-translation when translating long sentences, which is efficiently alleviated by Reinforce-NAT and RF-Decoder. 6 Conclusion In this paper, we aim to retrieve the sequential information for NAT models to enhance their translation ability while preserving fast-decoding property. Firstly, we propose a sequence-level training method based on a novel reinforcement algorithm for NAT (Reinforce-NAT), which significantly improves the performance of NAT models without decelerating the decoding speed. Secondly, we propose an innovative Transformer decoder named FS-decoder to fuse the target sequential information into the top layer of the decoder, which achieves comparable performance to the Transformer and still maintains substantial speedup. In the future, we plan to investigate better methods to leverage the sequential information. We believe that the following two directions are worth study. First, exploiting other sequencelevel training objectives like bag-of-words (Ma et al., 2018). Second, using sequential information distilled from the autoregressive teacher model to guide the training of the student nonautoregressive model. 7 Acknowledgments We thank the anonymous reviewers for their insightful comments. 
This work was supported by National Natural Science Foundation of China (NO.61662077, NO.61876174) and National Key R&D Program of China (NO.YS2017YFGH001428). References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yoshua Bengio, Jean-S´ebastien Sen´ecal, et al. 2003. Quick training of probabilistic neural nets by importance sampling. In AISTATS, pages 1–9. Aleksandar Botev, Bowen Zheng, and David Barber. 2017. Complementary sum sampling for likelihood approximation in large scale classification. In Artificial Intelligence and Statistics, pages 1030–1038. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning 3022 phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Peter W Glynn and Donald L Iglehart. 1989. Importance sampling for stochastic simulations. Management Science, 35(11):1367–1392. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017a. Nonautoregressive neural machine translation. arXiv preprint arXiv:1711.02281. Jiatao Gu, Kyunghyun Cho, and Victor OK Li. 2017b. Trainable greedy decoding for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1968–1978. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2018. Non-autoregressive neural machine translation with enhanced decoder input. arXiv preprint arXiv:1812.09664. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Łukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Pamar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. arXiv preprint arXiv:1803.03382. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901. Zhuohan Li, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Hint-based training for nonautoregressive translation. Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. arXiv preprint arXiv:1805.04871. Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. 
Ofir Press and Noah A. Smith. 2018. You may not need attention. CoRR, abs/1810.13409. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778–4784. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1683–1692. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Richard Stuart Sutton. 1984. Temporal credit assignment in reinforcement learning. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-autoregressive neural machine translation. arXiv preprint arXiv:1808.08583. 3023 Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. arXiv preprint arXiv:1902.10245. Lex Weaver and Nigel Tao. 2013. The optimal reward baseline for gradient-based reinforcement learning. Processings of the Seventeeth Conference on Uncertainty in Artificial Intelligence. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Reinforcement Learning, pages 5–32. Springer. Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270– 280. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and TieYan Liu. 2018. A study of reinforcement learning for neural machine translation. arXiv preprint arXiv:1808.08866. Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2017. Adversarial neural machine translation. arXiv preprint arXiv:1704.06933. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2017. 
Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852–2858. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018a. Accelerating neural transformer via an average attention network. arXiv preprint arXiv:1805.00631. Wen Zhang, Liang Huang, Yang Feng, Lei Shen, and Qun Liu. 2018b. Speeding up neural machine translation decoding by cube pruning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4284–4294.

A Supplemental Material

Proof for Eq.(9):

\begin{align}
-\nabla_\theta \mathcal{L}_\theta
&= \sum_{Y} \nabla_\theta \Big[ \prod_{t=1}^{T} p(y_t|X,\theta) \Big] \cdot r(Y) \nonumber\\
&= \sum_{Y} \sum_{t=1}^{T} \nabla_\theta p(y_t|X,\theta) \cdot \prod_{i=1}^{t-1} p(y_i|X,\theta) \cdot \prod_{j=t+1}^{T} p(y_j|X,\theta) \cdot r(Y) \nonumber\\
&= \sum_{t=1}^{T} \sum_{Y} \nabla_\theta p(y_t|X,\theta) \cdot \prod_{i=1}^{t-1} p(y_i|X,\theta) \cdot \prod_{j=t+1}^{T} p(y_j|X,\theta) \cdot r(Y) \nonumber\\
&= \sum_{t=1}^{T} \sum_{y_t} \nabla_\theta p(y_t|X,\theta) \cdot \sum_{y_{1:t-1}} \sum_{y_{t+1:T}} \prod_{i=1}^{t-1} p(y_i|X,\theta) \cdot \prod_{j=t+1}^{T} p(y_j|X,\theta) \cdot r(Y) \nonumber\\
&= \sum_{t=1}^{T} \sum_{y_t} \nabla_\theta p(y_t|X,\theta) \cdot \mathbb{E}_{y_{1:t-1}} \mathbb{E}_{y_{t+1:T}} r(Y) \nonumber\\
&= \sum_{t=1}^{T} \sum_{y_t} \nabla_\theta p(y_t|X,\theta) \cdot r(y_t) \tag{15}
\end{align}
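This position-wise decomposition is what Algorithm 2 estimates. For concreteness, a minimal PyTorch-style sketch of the per-position surrogate loss (Eq. 12) is given below; it is an illustration only, not the released implementation, and `reward_fn` is a hypothetical stand-in for the Monte-Carlo reward estimate of Algorithm 1.

```python
import torch

def reinforce_nat_position_loss(logits_t, reward_fn, k=5):
    """Surrogate loss for one target position (cf. Algorithm 2 / Eq. 12).
    Illustration only: `reward_fn` maps a word index to a float reward
    r(y_t), e.g. the Monte-Carlo estimate produced by Algorithm 1."""
    probs = torch.softmax(logits_t, dim=-1)

    # Traversing: handle the top-k words exactly (the bulk of the mass).
    topk_p, topk_idx = probs.topk(k)
    topk_r = logits_t.new_tensor([reward_fn(int(i)) for i in topk_idx])
    loss = -(topk_p * topk_r).sum()          # gradient: -sum_y grad p(y) * r(y)
    p_k = topk_p.detach().sum()              # probability mass P_k covered

    # Sampling: one REINFORCE sample from the renormalized remainder p~.
    rest = probs.detach().clone()
    rest[topk_idx] = 0.0
    rest = rest / rest.sum()
    y_s = int(torch.multinomial(rest, 1))
    loss = loss - (1.0 - p_k) * torch.log(probs[y_s]) * reward_fn(y_s)
    return loss
```

Minimizing the returned scalar reproduces, in expectation, the gradient accumulated by Algorithm 2: the top-k terms are handled exactly, and the remaining probability mass is covered by a single REINFORCE sample weighted by 1 − P_k.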
2019
288
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025–3036 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3025 STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework ∗ Mingbo Ma1,3 Liang Huang1,3 Hao Xiong2 Renjie Zheng3 Kaibo Liu1,3 Baigong Zheng1 Chuanqiang Zhang2 Zhongjun He2 Hairong Liu1 Xing Li1 Hua Wu2 Haifeng Wang2 1Baidu Research, Sunnyvale, CA, USA 2Baidu, Inc., Beijing, China 3Oregon State University, Corvallis, OR, USA {mingboma, lianghuang, xionghao05, hezhongjun}@baidu.com Abstract Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective “wait-k” policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en. 1 Introduction Simultaneous translation aims to automate simultaneous interpretation, which translates concurrently with the source-language speech, with a delay of only a few seconds. This additive latency is much more desirable than the multiplicative 2× slowdown in consecutive interpretation. With this appealing property, simultaneous interpretation has been widely used in many scenarios including multilateral organizations (UN/EU), and international summits (APEC/G-20). However, due to the concurrent comprehension and production in two languages, it is extremely challenging and exhausting for humans: the number of qualified simultaneous interpreters worldwide is very limited, and each can only last for about 15-30 minutes in one turn, whose error rates grow exponentially after just minutes of interpreting (Moser-Mercer et al., 1998). Moreover, lim∗M.M. and L.H. contributed equally; L.H. conceived the main ideas (prefix-to-prefix and wait-k) and directed the project, while M.M. led the implementations on RNN and Transformer. See example videos, media reports, code, and data at https://simultrans-demo.github.io/. President Bush met with Putin in Moscow Bùshí
布什 Bush zǒngtǒng
 ௛ᕹ President zài ࣁ at Mòsīkē ឭේᑀ Moscow yǔ Ө with Pǔjīng ฦՂ Putin huìwù տร meet prediction read write Source side → Target side → Figure 1: Our wait-k model emits target word yt given source-side prefix x1... xt+k−1, often before seeing the corresponding source word (here k=2, outputing y3=“met” before x7=“hu`ıw`u”). Without anticipation, a 5-word wait is needed (dashed arrows). See also Fig. 2. ited memory forces human interpreters to routinely omit source content (He et al., 2016). Therefore, there is a critical need to develop simultaneous machine translation techniques to reduce the burden of human interpreters and make it more accessible and affordable. Unfortunately, simultaneous translation is also notoriously difficult for machines, due in large part to the diverging word order between the source and target languages. For example, think about simultaneously translating an SOV language such as Japanese or German to an SVO language such as English or Chinese:1 you have to wait until the source language verb. As a result, existing so-called “real-time” translation systems resort to conventional full-sentence translation, causing an undesirable latency of at least one sentence. Some researchers, on the other hand, have noticed the importance of verbs in SOV→SVO translation 1 Technically, German is SOV+V2 in main clauses, and SOV in embedded clauses; Mandarin is a mix of SVO+SOV. 3026 B`ush´ı zˇongtˇong z`ai M`os¯ık¯e yˇu Pˇuj¯ıng hu`ıw`u 布 布 布什 什 什总 总 总统 统 统 在 在 在 莫 莫 莫斯 斯 斯科 科 科与 普京 会晤 Bush president in Moscow with/and Putin meet (a) simultaneous: our wait-2 ...wait 2 words... pres. bush met with putin in moscow (b) non-simultaneous baseline ..... wait whole sentence ...... pres. bush met with putin in moscow (c) simultaneous: test-time wait-2 ...wait 2 words... pres. bush in moscow and pol- ite meeting 布什总统 在莫斯科与 普京会晤 (d) simultaneous: non-predictive ...wait 2 words... pres. bush ..... wait 5 words ...... met with putin in moscow Figure 2: Another view of Fig. 1, highlighting the prediction of English “met” corresponding to the sentencefinal Chinese verb hu`ıw`u. (a) Our wait-k policy (here k = 2) translates concurrently with the source sentence, but always k words behind. It correclty predicts the English verb given just the first 4 Chinese words (in bold), lit. “Bush president in Moscow”, because it is trained in a prefix-to-prefix fashion (Sec. 3), and the training data contains many prefix-pairs in the form of (X z`ai Y ..., X met ...). (c) The test-time wait-k decoding (Sec. 3.2) using the full-sentence model in (b) can not anticipate and produces nonsense translation. (d) A simultaneous translator without anticipation such as Gu et al. (2017) has to wait 5 words. (Grissom II et al., 2016), and have attempted to reduce latency by explicitly predicting the sentencefinal German (Grissom II et al., 2014) or English verbs (Matsubarayx et al., 2000), which is limited to this particular case, or unseen syntactic constituents (Oda et al., 2015; He et al., 2015), which requires incremental parsing on the source sentence. Some researchers propose to translate on an optimized sentence segment level to get better translation accuracy (Oda et al., 2014; Fujita et al., 2013; Bangalore et al., 2012). More recently, Gu et al. 
(2017) propose a two-stage model whose base model is a full-sentence model, On top of that, they use a READ/WRITE (R/W) model to decide, at every step, whether to wait for another source word (READ) or to emit a target word using the pretrained base model (WRITE), and this R/W model is trained by reinforcement learning to prefer (rather than enforce) a specific latency, without updating the base model. All these efforts have the following major limitations: (a) none of them can achieve any arbitrary given latency such as “3-word delay”; (b) their base translation model is still trained on full sentences; and (c) their systems are complicated, involving many components (such as pretrained model, prediction, and RL) and are difficult to train. We instead present a very simple yet effective solution, designing a novel prefix-to-prefix framework that predicts target words using only prefixes of the source sentence. Within this framework, we study a special case, the “wait-k” policy, whose translation is always k words behind the input. Consider the Chinese-to-English example in Figs. 1–2, where the translation of the sentencefinal Chinese verb hu`ıw`u (“meet”) needs to be emitted earlier to avoid a long delay. Our wait-2 model correctly anticipates the English verb given only the first 4 Chinese words (which provide enough clue for this prediction given many similar prefixes in the training data). We make the following contributions: • Our prefix-to-prefix framework is tailored to simultaneous translation and trained from scratch without using full-sentence models. • It seamlessly integrates implicit anticipation and translation in a single model that directly predicts target words without explictly hallucinating source ones. • As a special case, we present a “wait-k” policy that can satisfy any latency requirements. • This strategy can be applied to most sequence-to-sequence models with relatively minor changes. Due to space constraints, we only present its performance the Transformer (Vaswani et al., 2017), though our initial experiments on RNNs (Bahdanau et al., 2014) showed equally strong results (see our November 2018 arXiv version https: //arxiv.org/abs/1810.08398v3). • Experiments show our strategy achieves low latency and reasonable BLEU scores (compared to full-sentence translation baselines) on 4 directions: zh↔en and de↔en. 2 Preliminaries: Full-Sentence NMT We first briefly review standard (full-sentence) neural translation to set up the notations. Regardless of the particular design of different seq-to-seq models, the encoder always takes 3027 … … wait whole source sentence … 1 2 source: target: 4 1 2 3 5 seq-to-seq 4 1 2 3 … wait k words 1 2 source: target: 5 prefix-to-prefix
 (wait-k) Figure 3: Seq-to-seq vs. our prefix-to-prefix frameworks (showing wait-2 as an example). the input sequence x = (x1, ..., xn) where each xi ∈Rdx is a word embedding of dx dimensions, and produces a new sequence of hidden states h = f(x) = (h1, ..., hn). The encoding function f can be implemented by RNN or Transformer. On the other hand, a (greedy) decoder predicts the next output word yt given the source sequence (actually its representation h) and previously generated words, denoted y<t = (y1, ..., yt−1). The decoder stops when it emits <eos>, and the final hypothesis y = (y1, ..., <eos>) has probability p(y | x) = Q|y| t=1 p(yt | x, y<t) (1) At training time, we maximize the conditional probability of each ground-truth target sentence y⋆ given input x over the whole training data D, or equivalently minimizing the following loss: ℓ(D) = −P (x,y⋆)∈D log p(y⋆| x) (2) 3 Prefix-to-Prefix and Wait-k Policy In full-sentence translation (Sec. 2), each yi is predicted using the entire source sentence x. But in simultaneous translation, we need to translate concurrently with the (growing) source sentence, so we design a new prefix-to-prefix architecture to (be trained to) predict using a source prefix. 3.1 Prefix-to-Prefix Architecture Definition 1. Let g(t) be a monotonic nondecreasing function of t that denotes the number of source words processed by the encoder when deciding the target word yt. For example, in Figs. 1–2, g(3) = 4, i.e., a 4word Chinese prefix is used to predict y3=“met”. We use the source prefix (x1, ..., xg(t)) rather than the whole x to predict yt: p(yt | x≤g(t), y<t). Therefore the decoding probability is: pg(y | x) = Q|y| t=1 p(yt | x≤g(t), y<t) (3) and given training D, the training objective is: ℓg(D) = −P (x, y⋆)∈D log pg(y⋆| x) (4) Generally speaking, g(t) can be used to represent any arbitrary policy, and we give two special cases where g(t) is constant: (a) g(t) = |x|: baseline full-sentence translation; (b) g(t) = 0: an “oracle” that does not rely on any source information. Note that in any case, 0 ≤g(t) ≤|x| for all t. Definition 2. We define the “cut-off” step, τg(|x|), to be the decoding step when source sentence finishes: τg(|x|) = min{t | g(t) = |x|} (5) For example, in Figs. 1–2, the cut-off step is 6, i.e., the Chinese sentence finishes right before y6=“in”. Training vs. Test-Time Prefix-to-Prefix. While most previous work in simultaneous translation, in particular Bangalore et al. (2012) and Gu et al. (2017), might be seen as special cases in this framework, we note that only their decoders are prefix-to-prefix, while their training is still fullsentence-based. In other words, they use a fullsentence translation model to do simultaneous decoding, which is a mismatch between training and testing. The essence of our idea, however, is to train the model to predict using source prefixes. Most importantly, this new training implicitly learns anticipation as a by-product, overcoming word-order differences such as SOV→SVO. Using the example in Figs. 1–2, the anticipation of the English verb is possible because the training data contains many prefix-pairs in the form of (X z`ai Y ..., X met ...), thus although the prefix x≤4=“B`ush´ı zˇongtˇong z`ai M`osik¯e” (lit. “Bush president in Moscow”) does not contain the verb, it still provides enough clue to predict “met”. 
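To make the framework concrete, a minimal sketch of prefix-to-prefix greedy decoding under an arbitrary policy g is given below (plain Python, illustration only; the `model(src_prefix, tgt_prefix)` interface is a hypothetical stand-in, not the OpenNMT-based implementation used in the experiments).

```python
def prefix_to_prefix_decode(model, src, g, max_len=200, eos="<eos>"):
    """Greedy decoding under Eq. 3: y_t is predicted from the source
    prefix x_{<=g(t)} and the already-emitted target words y_{<t}.
    `g` is any monotone non-decreasing policy with 0 <= g(t) <= |x|."""
    tgt = []
    for t in range(1, max_len + 1):
        src_prefix = src[:min(g(t), len(src))]   # x_{<= g(t)}
        dist = model(src_prefix, tgt)            # assumed: {word: prob}
        y_t = max(dist, key=dist.get)            # greedy next word
        tgt.append(y_t)
        if y_t == eos:
            break
    return tgt
```

Setting g(t) = |x| recovers the full-sentence baseline, while the wait-k policy introduced next corresponds to g(t) = min{k + t − 1, |x|}.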
3.2 Wait-k Policy As a very simple example within the prefix-toprefix framework, we present a wait-k policy, which first wait k source words, and then translates concurrently with the rest of source sentence, i.e., the output is always k words behind the input. This is inspired by human simultaneous interpreters who generally start translating a few seconds into the speakers’ speech, and finishes a few seconds after the speaker finishes. For example, if k = 2, the first target word is predicted using the first 2 source words, and the second target word 3028 using the first 3 source words, etc; see Fig. 3. More formally, its g(t) is defined as follows: gwait-k(t) = min{k + t −1, |x|} (6) For this policy, the cut-off point τgwait-k(|x|) is exactly |x| −k + 1 (see Fig. 14). From this step on, gwait-k(t) is fixed to |x|, which means the remaining target words (including this step) are generated using the full source sentence, similar to conventional MT. We call this part of output, y≥|x|−k, the “tail”, and can perform beam search on it (which we call “tail beam search”), but all earlier words are generated greedily one by one (see Appendix). Test-Time Wait-k. As an example of testtime prefix-to-prefix in the above subsection, we present a very simple “test-time wait-k” method, i.e., using a full-sentence model but decoding it with a wait-k policy (see also Fig. 2(c)). Our experiments show that this method, without the anticipation capability, performs much worse than our genuine wait-k when k is small, but gradually catches up, and eventually both methods approach the full-sentence baseline (k = ∞). 4 New Latency Metric: Average Lagging Beside translation quality, latency is another crucial aspect for evaluating simultaneous translation. We first review existing latency metrics, highlighting their limitations, aand then propose our new latency metric that address these limitations. 4.1 Existing Metrics: CW and AP Consecutive Wait (CW) (Gu et al., 2017) is the number of source words waited between two target words. Using our notation, for a policy g(·), the per-step CW at step t is CWg(t) = g(t)−g(t−1). The CW of a sentence-pair (x, y) is the average CW over all consecutive wait segments: CWg(x, y) = P|y| t=1 CWg(t) P|y| t=1 1CWg(t)>0 = |x| P|y| t=1 1CWg(t)>0 In other words, CW measures the average source segment length (the best case is 1 for wordby-word translation or our wait-1 and the worst case is |x| for full-sentence MT). The drawback of CW is that CW is local latency measurement which is insensitive to the actual lagging behind. Another latency measurement, Average Proportion (AP) (Cho and Esipova, 2016) measures the proportion of the area above a policy path in Fig. 1: Source→ Target→ 1 2 3 4 5 6 7 8 9 10 Source→ Target→ 1 2 3 4 5 6 7 8 9 10 11 12 13 Figure 4: Illustration of our proposed Average Lagging latency metric. The left figure shows a simple case when |x| = |y| while the right figure shows a more general case when |x| ̸= |y|. The red policy is wait4, the yellow is wait-1, and the thick black is a policy whose AL is 0. APg(x, y) = 1 |x| |y| P|y| t=1 g(t) (7) AP has two major flaws: First, it is sensitive to input length. For example, consider our wait-1 policy. When |x| = |y| = 1, AP is 1, and when |x| = |y| = 2, AP is 0.75, and eventually AP approaches 0.5 when |x| = |y| →∞. However, in all these cases, there is a one word delay, so AP is not fair between long and short sentences. Second, being a percentage, it is not obvious to the user the actual delays in number of words. 
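For reference, the wait-k policy of Eq. 6 and both existing metrics can each be computed in a few lines; the sketch below is a plain-Python illustration only (per sentence, at the word level, ignoring subword segmentation).

```python
def g_wait_k(k, src_len):
    """Wait-k policy, Eq. 6: g(t) = min(k + t - 1, |x|)."""
    return lambda t: min(k + t - 1, src_len)

def consecutive_wait(g, src_len, tgt_len):
    """CW: average source-segment length between consecutive writes."""
    per_step = [g(1)] + [g(t) - g(t - 1) for t in range(2, tgt_len + 1)]
    return src_len / max(sum(1 for w in per_step if w > 0), 1)

def average_proportion(g, src_len, tgt_len):
    """AP, Eq. 7: normalized area under the policy path."""
    return sum(g(t) for t in range(1, tgt_len + 1)) / (src_len * tgt_len)

print(average_proportion(g_wait_k(1, 1), 1, 1))   # 1.0 for a 1-word pair
```

The last line reproduces the length sensitivity noted above: the same one-word delay yields AP = 1 on a one-word sentence pair but approaches 0.5 as sentences grow.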
4.2 New Metric: Average Lagging Inspired by the idea of “lagging behind the ideal policy”, we propose a new metric called “average lagging” (AL), shown in Fig. 4. The goal of AL is to quantify the degree the user is out of sync with the speaker, in terms of the number of source words. The left figure shows a special case when |x| = |y| for simplicity reasons. The thick black line indicates the “wait-0” policy where the decoder is alway one word ahead of the encoder and we define this policy to have an AL of 0. The diagonal yellow policy is our “wait-1” which is always one word behind the wait-0 policy. In this case, we define its AL to be 1. The red policy is our wait-4, and it is always 4 words behind the wait-0 policy, so its AL is 4. Note that in both cases, we only count up to (but including) the cut-off point (indicated by the horizontal yellow/red arrows, or 10 and 7, resp.) because the tail can be generated instantly without further delay. More formally, for the ideal case where |x = |y|, we can define: ALg(x, y) = 1 τg(|x|) τg(|x|) X t=1 g(t) −(t −1) (8) 3029 We can infer that the AL for wait-k is exactly k. When we have more realistic cases like the right side of Fig. 4 when |x| < |y|, there are more and more delays accumulated when target sentence grows.For example, for the yellow wait-1 policy has a delay of more than 3 words at decoding its cut-off step 10, and the red wait-4 policy has a delay of almost 6 words at its cut-off step 7. This difference is mainly caused by the tgt/src ratio. For the right example, there are 1.3 target words per source word. More generally, we need to offset the “wait-0” policy and redefine: ALg(x, y) = 1 τg(|x|) τg(|x|) X t=1 g(t) −t −1 r (9) where τg(|x|) denotes the cut-off step, and r = |y|/|x| is the target-to-source length ratio. We observe that wait-k with catchup has an AL ≃k. 5 Implementation Details While RNN-based implementation of our wait-k model is straightforward and our initial experiments showed equally strong results, due to space constraints we will only present Transformerbased results. Here we describe the implementation details for training a prefix-to-prefix Transformer, which is a bit more involved than RNN. 5.1 Background: Full-Sentence Transformer We first briefly review the Transformer architecture step by step to highlight the difference between the conventional and simultaneous Transformer. The encoder of Transformer works in a self-attention fashion and takes an input sequence x, and produces a new sequence of hidden states z = (z1, ..., zn) where zi ∈Rdz is as follows: zi = Pn j=1 αij PWV(xj) (10) Here PWV(·) is a projection function from the input space to the value space, and αij denotes the attention weights: αij = exp eij Pn l=1 exp eil , eij = PWQ(xi)PWV(xj)T √dx (11) where eij measures similarity between inputs. Here PWQ(xi) and PWK(xj) project xi and xj to query and key spaces, resp. We use 6 layers of self-attention and use h to denote the top layer output sequence (i.e., the source context). On the decoder side, during training time, the gold output sequence y∗ = (y∗ 1, ..., y∗ m) goes through the same self-attention to generate hidden self-attended state sequence c = (c1, ..., cm). Note that because decoding is incremental, we let αij = 0 if j > i in Eq. 11 to restrict self-attention to previously generated words. 
In each layer, after we gather all the hidden representations for each target word through selfattention, we perform target-to-source attention: c′ i = Pn j=1 βij PWV′(hj) similar to self-attention, βij measures the similarity between hj and ci as in Eq. 11. 5.2 Training Simultaneous Transformer Simultaneous translation requires feeding the source words incrementally to the encoder, but a naive implementation of such incremental encoder/decoder is inefficient. Below we describe a faster implementation. For the encoder, during training time, we still feed the entire sentence at once to the encoder. But different from the self-attention layer in conventional Transformer (Eq. 11), we constrain each source word to attend to its predecessors only (similar to decoder-side self-attention), effectively simulating an incremental encoder: α(t) ij =    exp e(t) ij Pg(t) l=1 exp e(t) il if i, j ≤g(t) 0 otherwise e(t) ij = ( PWQ(xi) PWK(xj)T √dx if i, j ≤g(t) −∞ otherwise Then we have a newly defined hidden state sequence z(t) = (z(t) 1 , ..., z(t) n ) at decoding step t: z(t) i = Pn j=1 α(t) ij PWV(xj) (12) When a new source word is received, all previous source words need to adjust their representations. 6 Experiments 6.1 Datasets and Systems Settings We evaluate our work on four simultaneous translation directions: German↔English and Chinese↔English. For the training data, we use the parallel corpora available from WMT152 2http://www.statmt.org/wmt15/translation-task.html 3030 2 4 6 8 10 Average Lagging (de en) 15 20 25 30 1-ref BLEU k=1 k=1 k=3 k=3 k=5 k=5 k=7 k=7 k=9 k=9 28.6 wait-k test-time wait-k 2 4 6 Consecutive Wait (de en) 15 20 25 30 1-ref BLEU k=1 k=1 k=3 k=3 k=5 k=5 k=7 k=7 k=9 k=9 28.6 wait-k test-time wait-k Figure 5: Translation quality against latency metrics (AL and CW) on German-to-English simultaneous translation, showing wait-k and test-time wait-k results, full-sentence baselines, and our adaptation of Gu et al. (2017) (▶:CW=2; ▼:CW=5; ■:CW=8), all based on the same Transformer. ⋆$:full-sentence (greedy and beam-search). 2 4 6 8 Average Lagging (en de) 10 15 20 25 1-ref BLEU k=1 k=1 k=3 k=3 k=5 k=5 k=7 k=7 k=9 k=9 26.6 wait-k test-time wait-k 2 4 Consecutive Wait (en de) 10 15 20 25 1-ref BLEU k=1 k=1 k=3 k=3 k=5 k=5 k=7 k=7 k=9 k=9 26.6 wait-k test-time wait-k Figure 6: Translation quality against latency metrics on English-to-German simultaneous translation. 1 3 5 7 9 11 Average Lagging (zh en) 15 20 25 30 35 40 4-ref BLEU k=1 k=3 k=5 k=7 k=9 k=1 k=3 k=5 k=7 k=9 wait-k test-time wait-k 33.14 0 2 4 6 Consecutive Wait (zh en) 15 20 25 30 35 40 4-ref BLEU k=1 3 5 7 9 k=1 3 5 7 9 wait-k test-time wait-k 33.14 Figure 7: Translation quality against latency on Chinese-to-English simultaneous translation. 1 3 5 7 9 11 Average Lagging (en zh) 7.5 10.0 12.5 15.0 17.5 20.0 22.5 1-ref BLEU k=1 k=3 k=5 k=7 k=9 k=1 k=3 k=5 k=7 k=9 wait-k test-time wait-k 33.14 2 4 6 Consecutive Wait (en zh) 7.5 10.0 12.5 15.0 17.5 20.0 22.5 1-ref BLEU k=1 3 5 7 9 k=1 3 5 7 9 wait-k test-time wait-k 33.14 Figure 8: Translation quality against latency on English-to-Chinese, with encoder catchup (see Appendix A). 3031 Train Test k=1 k=3 k=5 k=7 k=9 k=∞ k′=1 34.1 33.3 31.8 31.2 30.0 15.4 k′=3 34.7 36.7 37.1 36.7 36.7 18.3 k′=5 30.7 36.7 37.8 38.4 38.6 22.4 k′=7 31.0 37.0 39.4 40.0 39.8 23.7 k′=9 26.4 35.6 39.1 40.1 41.0 28.6 k′=∞ 21.8 30.2 36.0 38.9 39.9 43.2 Table 1: wait-k policy in training and test (4-ref BLEU, zh→en dev set). The bottom row is “test-time wait-k”. 
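This restriction is realized by masking the attention logits before the softmax; the following PyTorch-style sketch illustrates only that masking (shapes and function names are assumptions, not the exact training code).

```python
import torch

def causal_mask(length):
    """True where position i may attend to position j (j <= i),
    i.e. alpha_ij = 0 for j > i as in Eq. 11."""
    return torch.tril(torch.ones(length, length, dtype=torch.bool))

def masked_self_attention(Q, K, V, mask):
    """Scaled dot-product attention with disallowed logits set to -inf."""
    scores = Q @ K.transpose(-2, -1) / Q.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ V
```

Sec. 5.2 below applies the same masking idea on the encoder side, with the boundary set by g(t), to simulate incremental reading of the source.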
Bold: best in a column; italic: best in a row. for German↔English (4.5M sentence pairs) and NIST corpus for Chinese↔English (2M sentence pairs). We first apply BPE (Sennrich et al., 2015) on all texts in order to reduce the vocabulary sizes. For German↔English evaluation, we use newstest-2013 (dev) as our dev set and newstest-2015 (test) as our test set, with 3,000 and 2,169 sentence pairs, respectively. For Chinese↔English evaluation, we use NIST 2006 and NIST 2008 as our dev and test sets. They contain 616 and 691 Chinese sentences, each with 4 English references. When translating from Chinese to English, we report 4-reference BLEU scores, and in the reverse direction, we use the second among the four English references as the source text, and report 1-reference BLEU scores. Our implementation is adapted from PyTorchbased OpenNMT (Klein et al., 2017). Our Transformer is essentially the same as the base model from the original paper (Vaswani et al., 2017). 6.2 Quality and Latency of Wait-k Model Tab. 1 shows the results of a model trained with wait-k′ but decoded with wait-k (where ∞means full-sentence). Our wait-k is the diagonal, and the last row is the “test-time wait-k” decoding. Also, the best results of wait-k decoding is often from a model trained with a slightly larger k′. Figs. 5–8 plot translation quality (in BLEU) against latency (in AL and CW) for full-sentence baselines, our wait-k, test-time wait-k (using fullsentence models), and our adaptation of Gu et al. (2017) from RNN to Transformer3 on the same Transformer baseline. In all these figures, we observe that, as k increases, (a) wait-k improves in BLEU score and worsens in latency, and (b) the 3 However, it is worth noting that, despite our best efforts, we failed to reproduce their work on their original RNN, regardless of using their code or our own. That being said, our successful implementation of their work on Transformer is also a notable contribution of this work. By contrast, it is very easy to make wait-k work on either RNN or Transformer. k=3 k=5 k=7 k=3 k=5 k=7 zh→en en→zh sent-level % 33 21 9 52 27 17 word-level % 2.5 1.5 0.6 5.8 3.4 1.4 accuracy 55.4 56.3 66.7 18.6 20.9 22.2 de→en en→de sent-level % 44 27 8 28 2 0 word-level % 4.5 1.5 0.6 1.4 0.1 0.0 accuracy 26.0 56.0 60.0 10.7 50.0 n/a Table 2: Human evaluation for all four directions (100 examples each from dev sets). We report sentence- and word-level anticipation rates, and the word-level anticipation accuracy (among anticipated words). gap between test-time wait-k and wait-k shrinks. Eventually, both wait-k and test-time wait-k approaches the full-sentence baseline as k →∞. These results are consistent with our intuitions. We next compare our results with our adaptation of Gu et al. (2017)’s two-staged full-sentence model + reinforcement learning on Transformer. We can see that while on BLEU-vs-AL plots, their models perform similarly to our test-time wait-k for de→en and zh→en, and slightly better than our test-time wait-k for en→zh, which is reasonable as both use a full-sentence model at the very core. However, on BLEU-vs-CW plots, their models have much worse CWs, which is also consistent with results in their paper (Gu, p.c.). This is because their R/W model prefers consecutive segments of READs and WRITEs (e.g., their model often produces R R R R R W W W W R R R W W W W R ...) while our wait-k translates concurrently with the input (the initial segment has length k, and all others have length 1, thus a much lower CW). 
We also found their training to be extremely brittle due to the use of RL whereas our work is very robust. 6.3 Human Evaluation on Anticipation Tab. 2 shows human evaluations on anticipation rates and accuracy on all four directions, using 100 examples in each language pair from the dev sets. As expected, we can see that, with increasing k, the anticipation rates decrease (at both sentence and word levels), and the anticipation accuracy improves. Moreover, the anticipation rates are very different among the four directions, with en→zh > de→en > zh→en > en→de Interestingly, this order is exactly the same with the order of the BLEU-score gaps between our wait-9 and full-sentence models: en→zh: 2.7 > de→en: 1.1 > zh→en: 1.6† > en→de: 0.3 3032 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 dochw¨ahrendmansichimkongress nichtauf ein vorgeheneinigen kann , wartenmehrere bs. nicht l¨anger but while they -self in congress not on one action agree can , wait several states no longer k=3 but , while congress has not agreed on a courseof action , severalstates no longer wait Figure 9: German-to-English example in the dev set with anticipation. The main verb in the embedded clause, “einigen” (agree), is correctly predicted 3 words ahead of time (with “sich” providing a strong hint), while the aux. verb “kann” (can) is predicted as “has”. The baseline translation is “but , while congressional action can not be agreed , several states are no longer waiting”. bs.: bunndesstaaten. 1 2 3 4 5 6 7 8 9 10 11 12 t¯a h´ai shu¯o xi`anz`ai zh`engz`ai w`ei zh`e y¯ı fˇangw`en zu`o ch¯u ¯anp´ai 他还说现在 正在 为这一 访问 作 出 安排 he also said now (prog.)⋄ for this one visit make out arrangement k=1 he also said that he is now making preparations for this visit k=3 he also said that he is making preparations for this visit k=∞ he also said that arrangements are being made for this visit Figure 10: Chinese-to-English example in the dev set with anticipation. Both wait-1 and wait-3 policies yield perfect translations, with “making preparations” predicted well ahead of time. ⋄: progressive aspect marker. 1 2 3 4 5 6 7 8 9 10 11 ji¯ang z´em´ın du`ı b`ush´ı zˇongtˇong l´ai hu´a fˇangw`en biˇaosh`ı r`eli`e hu¯any´ıng 江泽民对布什总统 来 华 访问 表示 热烈欢迎 jiang zeming to bush president come-to china visit express warm welcome k=3 jiang zemin expressed welcome to president bush ’s visit to china k=3† jiang zemin meets president bush in china ’s bid to visit china Figure 11: Chinese-to-English example from online news. Our wait-3 model correctly anticipates both “expressed” and “welcome” (though missing “warm”), and moves the PP (“to ... visit to china”) to the very end which is fluent in the English word order. †: test-time wait-k produces nonsense translation. 1 2 3 4 5 6 7 8 9 10 Mˇeigu´o d¯angj´u du`ı Sh¯at`e j`ızhˇe sh¯ız¯ong y¯ı `an gˇand`ao d¯any¯ou (a) 美国 当局 对沙特记者 失踪 一 案 感到 担忧 US authorities to Saudi reporter missing a case feel concern k=3 the us authorities are very concerned about the saudi reporter ’s missing case k=3† the us authorities have disappeared from saudi reporters b`umˇan (b) 美国 当局 对沙特记者 失踪 一 案 感到 不 不 不满 满 满 k=3 the us authorities are very concerned about the saudi reporter ’s missing case k=5 the us authorities have expressed dissatisfaction with the incident of saudi arabia ’s missing reporters Figure 12: (a) Chinese-to-English example from more recent news, clearly outside of our data. Both the verb “gˇand`ao” (“feel”) and the predicative “d¯any¯ou” (“concerned”) are correctly anticipated, probably hinted by “missing”. 
(b) If we change the latter to b`umˇan (“dissatisfied”), the wait-3 result remains the same (which is wrong) while wait-5 translates conservatively without anticipation. †: test-time wait-k produces nonsense translation. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 it was learned that this is the largest fire accident in the medical and health system nationwide since the founding of new china k=3 k=3† 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 j`u liˇaojiˇe , zh`e sh`ı zh¯onggu´o j`ın jˇı ni´an l´ai f¯ash¯eng de zu`ı d`a y¯ı qˇı y¯ıli´ao w`eish¯eng x`ıtˇong huˇoz¯ai sh`ıg`u 据 了解, 这是 中国 近 几 年 来 发生 的 最大一起 医疗 卫生系统火灾事故 to known , this is China recent few years since happen most big one case medical health system fire accident y¯ınw`ei t¯a sh`ı , zh`eg`e , sh`ı z`ui d`a de huˇoz¯ai sh`ıg`u , zh`e sh`ı x¯ın zh¯onggu´o ch´engl`ı yˇıl´ai 因为 它是, 这个 , 是 最 大 的 火灾 事故 , 这是新 中国 成立以来 because it is , this , is most big fire accident , this is new China funding since Figure 13: English-to-Chinese example in the dev set with incorrect anticipation due to mandatory long-distance reorderings. The English sentence-final clause “since the founding of new china” is incorrectly predicted in Chinese as “近几年来”(“in recent years”). Test-time wait-3 produces translation in the English word order, which sounds odd in Chinese, and misses two other quantifiers (“in the medical and health system” and “nationwide”), though without prediction errors. The full-sentence translation, “据了解,这是新中国成立以来,全国医 疗卫生系统发生的最大的一起火灾事故”, is perfect. 3033 (†: difference in 4-ref BLEUs, which in our experience reduces by about half in 1-ref BLEUs). We argue that this order roughly characterizes the relative difficulty of simultaneous translation in these directions. In our data, we found en→zh to be particularly difficult due to the mandatory long-distance reorderings of English sentencefinal temporal clauses (such as “in recent years”) to much earlier positions in Chinese; see Fig. 13 for an example. It is also well-known that de→en is more challenging in simultaneous translation than en→de since SOV→SVO involves prediction of the verb, while SVO→SOV generally does not need prediction in our wait-k with a reasonable k, because V is often shorter than O. For example, human evaluation found only 1.4%, 0.1%, and 0% word anticipations in en→de for k=3, 5 and 7, and 4.5%, 1.5%, and 0.6% for de→en. 6.4 Examples and Discussion We showcase some examples in de→en and zh→en from the dev sets and online news in Figs. 9 to 12. In all these examples except Fig. 12(b), our wait-k models can generally anticipate correctly, often producing translations as good as the full-sentence baseline. In Fig. 12(b), when we change the last word, the wait-3 translation remains unchanged (correct for (a) but wrong for (b)), but wait-5 is more conservative and produces the correct translation without anticipation. Fig. 13 demonstrates a major limitation of our fixed wait-k policies, that is, sometimes it is just impossible to predict correctly and you have to wait for more source words. In this example, due to the required long-distance reordering between English and Chinese (the sentence-final English clause has to be placed very early in Chinese), any wait-k model would not work, and a good policy should wait till the very end. 7 Related Work The work of Gu et al. 
(2017) is different from ours in four (4) key aspects: (a) by design, their model does not anticipate; (b) their model can not achieve any specified latency metric at test time while our wait-k model is guaranteed to have a k-word latency; (c) their model is a combination of two models, using a full-sentence base model to translate, thus a mismatch between training and testing, while our work is a genuine simultaneous model, and (d) their training is also two-staged, using RL to update the R/W model, while we train from scratch. In a parallel work, Press and Smith (2018) propose an “eager translation” model which also outputs target-side words before the whole input sentence is fed in, but there are several crucial differences: (a) their work still aims to translate full sentences using beam search, and is therefore, as the authors admit, “not a simultaneous translation model”; (b) their work does not anticipate future words; and (c) they use word alignments to learn the reordering and achieve it in decoding by emitting the ϵ token, while our work integrates reordering into a single wait-k prediction model that is agnostic of, yet capable of, reordering. In another recent work, Alinejad et al. (2018) adds a prediction action to the work of Gu et al. (2017). Unlike Grissom II et al. (2014) who predict the source verb which might come after several words, they instead predict the immediate next source words, which we argue is not as useful in SOV-to-SVO translation. 4 In any case, we are the first to predict directly on the target side, thus integrating anticipation in a single translation model. Jaitly et al. (2016) propose an online neural transducer for speech recognition that is conditioned on prefixes. This problem does not have reorderings and thus no anticipation is needed. 8 Conclusions We have presented a prefix-to-prefix training and decoding framework for simultaneous translation with integrated anticipation, and a wait-k policy that can achieve arbitrary word-level latency while maintaining high translation quality. This prefixto-prefix architecture has the potential to be used in other sequence tasks outside of MT that involve simultaneity or incrementality. We leave many open questions to future work, e.g., adaptive policy using a single model (Zheng et al., 2019). Acknowledgments We thank Colin Cherry (Google Montreal) for spotting a mistake in AL (Eq. 8), Hao Zhang (Google NYC) for comments, the bilingual speakers for human evaluations, and the anonymous reviewers for suggestions. 4 Their codebase on Github is not runnable, and their baseline is inconsistent with Gu et al. (2017) which we compared to, so we did not include their results for comparison. 3034 References Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proc. of NAACLHLT. Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? volume abs/1606.02012. http://arxiv.org/abs/1606.02012. T Fujita, Graham Neubig, Sakriani Sakti, T Toda, and S Nakamura. 2013. 
Simple, lexicalized choice of translation timing for simultaneous speech translation. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH . Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum´e III. 2014. Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proceedings of the 2014 Conference on empirical methods in natural language processing (EMNLP). pages 1342–1352. Alvin Grissom II, Naho Orita, and Jordan BoydGraber. 2016. Incremental prediction of sentencefinal verbs: Humans versus machines. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O. K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers. pages 1053–1062. https://aclanthology.info/papers/E171099/e17-1099. He He, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation. In North American Association for Computational Linguistics. He He, Alvin Grissom II, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Syntax-based rewriting for simultaneous machine translation. In Empirical Methods in Natural Language Processing. Liang Huang, Kai Zhao, and Mingbo Ma. 2017. When to finish? optimal beam search for neural text generation (modulo beam size). In EMNLP. Navdeep Jaitly, David Sussillo, Quoc V Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio. 2016. An online sequence-to-sequence model using partial conditioning. In Advances in Neural Information Processing Systems. pages 5067–5075. G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints . S Matsubarayx, K Iwashimaz, N Kawaguchizx, K Toyama, and Yasuyoshi Inagaki. 2000. Simultaneous japanese-english interpretation based on early prediction of english verb . Barbara Moser-Mercer, Alexander K¨unzli, and Marina Korac. 1998. Prolonged turns in interpreting: Effects on quality, physiological and psychological stress (pilot study). Interpreting 3(1):47–64. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based simultaneous translation through prediction of unseen syntactic constituents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 198–207. Ofir Press and Noah A. Smith. 2018. You may not need attention. https://arxiv.org/abs/1810.13409. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909 . Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30. Yilin Yang, Liang Huang, and Mingbo Ma. 2018. 
Breaking the beam search curse: A study of (re-)scoring methods and stopping criteria for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019. Simultaneous translation with flexible policy via restricted imitation learning. In ACL.

Appendix A Supplemental Material: Model Refinement with Catchup Policy

As mentioned in Sec. 3, the wait-k decoding is always k words behind the incoming source stream. In the ideal case where the input and output sentences have equal length, the translation will finish k steps after the source sentence finishes, i.e., the tail length is also k. This is consistent with human interpreters, who start and stop a few seconds after the speaker starts and stops. However, input and output sentences generally have different lengths. In some extreme directions such as Chinese to English, the target side is significantly longer than the source side, with an average gold tgt/src ratio, r = |y⋆|/|x|, of around 1.25 (Huang et al., 2017; Yang et al., 2018). In this case, if we still follow the vanilla wait-k policy, the tail length will be 0.25|x| + k, which increases with input length. For example, given a 20-word Chinese input sentence, the tail of the wait-3 policy will be 8 words long, almost half of the source length. This brings two negative effects: (a) as decoding progresses, the user effectively lags further and further behind (because each Chinese word in principle translates to 1.25 English words), rendering the user more and more out of sync with the speaker; and (b) when a source sentence finishes, the rather long tail is displayed immediately, causing a cognitive burden on the user.5 These problems become worse with longer input sentences (see Fig. 14). To address this problem, we devise a "wait-k+catchup" policy so that the user is still k words behind the input in terms of real information content, i.e., always k source words behind the ideal perfect synchronization policy denoted by the diagonal line in Fig. 14. For example, assuming the tgt/src ratio is r = 1.25, we output 5 target words for every 4 source words; i.e., the catchup frequency, denoted c = r − 1, is 0.25. See Fig. 14. More formally, with catchup frequency c, the new policy is:

g_{wait-k, c}(t) = min{k + t − 1 − ⌊ct⌋, |x|}    (13)

and our decoding and training objectives change accordingly (again, we train the model to catch up using this new policy).

5 It is true that the tail can in principle be displayed concurrently with the first k words of the next input, but the tail is now much longer than k.
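To make the schedule concrete, the following is a minimal Python sketch of the wait-k-with-catchup policy of Eq. 13 (not the authors' released implementation; the function names and the toy sentence lengths are illustrative). It expands g_{wait-k,c}(t) into the explicit READ/WRITE action sequence a decoder would follow.

```python
import math

def g_wait_k_catchup(t, k, c, src_len):
    """Eq. 13: number of source words read before emitting target word t (1-indexed)."""
    return min(k + t - 1 - math.floor(c * t), src_len)

def read_write_schedule(k, c, src_len, tgt_len):
    """Expand the policy into an explicit sequence of 'R' (read) / 'W' (write) actions."""
    actions, num_read = [], 0
    for t in range(1, tgt_len + 1):
        # read until g(t) source words are available (or the source is exhausted)
        while num_read < g_wait_k_catchup(t, k, c, src_len) and num_read < src_len:
            actions.append("R")
            num_read += 1
        actions.append("W")
    return "".join(actions)

# Wait-3 with catchup frequency c = 0.25 (tgt/src ratio r = 1.25): roughly one extra
# write per four reads after the initial waits, i.e. the R R (R W R W R W R W W)+ W+
# pattern described in the appendix, up to where the extra write falls inside each cycle.
print(read_write_schedule(k=3, c=0.25, src_len=8, tgt_len=10))
```

The same function also covers the "reverse" catchup case discussed below, where a negative c makes the policy read extra source words instead of writing extra target words.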
[Figure 14: two decoding-path plots (Chinese source words read vs. English target words written) for wait-2 without and with catchup; the tail regions and the ideal diagonal are marked.]
Figure 14: Left (wait-2): it renders the user increasingly out of sync with the speaker (the diagonal line denotes the ideal perfect synchronization). Right (+catchup): it shrinks the tail and is closer to the ideal diagonal, reducing the effective latency. Black and red arrows illustrate 2 and 4 words lagging behind the diagonal, resp.

On the other hand, when translating from longer source sentences to shorter targets, e.g., from English to Chinese, it is very possible that the decoder finishes generation before the encoder sees the entire source sentence, ignoring the "tail" on the source side. Therefore, we need "reverse" catchup, i.e., catching up on the encoder instead of the decoder. For example, in English-to-Chinese translation, we encode one extra word every 4 steps, i.e., encoding 5 English words per 4 Chinese words. In this case, the "decoding" catchup frequency c = r − 1 = −0.2 is negative, but Eq. 13 still holds. Note that it works for any arbitrary c, such as 0.341, where the catchup pattern is not as easy as "1 in every 4 steps", but still maintains a rough frequency of c catchups per source word. Fig. 15 shows the comparison between the wait-k model and the catchup policy, which enables one extra word of decoding on every 4th step. For example, for the wait-3 policy with catchup, the policy is R R (R W R W R W R W W)+ W+.

[Plot: 4-ref BLEU against Average Lagging (zh→en) for Transformer wait-k models and +decoder catchup.]

Figure 15: BLEU scores and AL comparisons with different wait-k models on Chinese-to-English on the dev set. □ and ◦ are decoded with tail beam search. ⋆ markers are the greedy decoding and beam-search baselines.

B Supplemental Material: Evaluations with AP

We also evaluate our work using Average Proportion (AP) on both de↔en and zh↔en translation, comparing with full-sentence translation and Gu et al. (2017).

[Plots: 1-ref BLEU against Average Proportion for de→en and en→de, wait-k and test-time wait-k.]

Figure 16: Translation quality against AP on de↔en simultaneous translation, showing wait-k models (for k=1, 3, 5, 7, 9), test-time wait-k results, full-sentence baselines, and our reimplementation of Gu et al. (2017), all based on the same Transformer. ⋆: full-sentence (greedy and beam-search); Gu et al. (2017): ▶: CW=2; ▼: CW=5; ■: CW=8.
[Plots: 4-ref BLEU against Average Proportion for zh→en and 1-ref BLEU against Average Proportion for en→zh, wait-k and test-time wait-k.]

Figure 17: Translation quality against AP on zh↔en simultaneous translation, showing wait-k models (for k=1, 3, 5, 7, 9), test-time wait-k results, full-sentence baselines, and our reimplementation of Gu et al. (2017), all based on the same Transformer. ⋆: full-sentence (greedy and beam-search); Gu et al. (2017): ▶: CW=2; ▼: CW=5; ■: CW=8.
2019
289
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 303–315, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Self-Regulated Interactive Sequence-to-Sequence Learning

Julia Kreutzer, Computational Linguistics, Heidelberg University, Germany, [email protected]
Stefan Riezler, Computational Linguistics & IWR, Heidelberg University, Germany, [email protected]

Abstract

Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the self-regulator discovers an ϵ-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.

1 Introduction

The concept of self-regulation has been studied in educational research (Hattie and Timperley, 2007; Hattie and Donoghue, 2016), psychology (Zimmerman and Schunk, 1989; Panadero, 2017), and psychiatry (Nigg, 2017), and was identified as central to successful learning. "Self-regulated students" can be characterized as "becoming like teachers", in that they have a repertoire of strategies to self-assess and self-manage their learning process, and they know when to seek help and which kind of help to seek. While there is a vast literature on machine learning approaches to meta-learning (Schmidhuber et al., 1996), learning-to-learn (Thrun and Pratt, 1998), or never-ending learning (Mitchell et al., 2015), the aspect of learning when to ask for which kind of feedback has so far been neglected in this field.

We propose a machine learning algorithm that uses self-regulation in order to balance the cost and effect of learning from different types of feedback. This is particularly relevant for human-in-the-loop machine learning, where human supervision is costly.

Figure 1: Human-in-the-loop self-regulated learning.

The self-regulation module automatically learns which kind of feedback to apply when in training—full supervision by teacher demonstration or correction, weak supervision in the form of positive or negative rewards for student predictions, or a self-supervision signal generated by the student. Figure 1 illustrates this learning scenario. The learner, in our case a sequence-to-sequence (Seq2Seq) learner, aims to solve a certain task with the help of a human teacher. For every input it receives for training, it can ask the teacher for feedback on its own output, or supervise itself by training on its own output, or skip learning on the input example altogether. The self-regulator's policy for choosing feedback types is guided by their cost and by the performance gain achieved by learning from a particular type of feedback. We apply the self-regulation algorithm to interactive machine translation, where a neural machine translation (NMT) system functions as a student which receives feedback simulated from a human reference translation or supervises itself.
The intended real-world application is a machine translation personalization scenario where the goal of the human translator is to teach the NMT system 304 to adapt to in-domain data with the best trade-off between feedback cost and performance gain. It can be transferred to other sequence-to-sequence learning tasks such as personalization of conversational AI systems for question-answering or geographical navigation. Our analysis of different configurations of selfregulation yields the following insights: Perhaps unsurprisingly, the self-regulator learns to balance all types of feedback instead of relying only on the strongest or cheapest option. This is an advantage over active learning strategies that only consider the choice between no supervision and full supervision. Interestingly, though, we find that the selfregulator learns to trade off exploration and exploitation similar to a context-free ϵ-greedy strategy that optimizes ϵ for fastest learning progress. Lastly, we show that the learned regulator is robust in a cold-start transfer to new domains, and even shows improvements over fully supervised learning on domains such as literary books where reference translations provide less effective learning signals. 2 Related Work The incorporation of a query’s cost into reinforcement learning has been addressed, for example, in the framework of active reinforcement learning (Krueger et al., 2016). The central question in active reinforcement learning is to quantify the long-term value of reward information, however, assuming a fixed cost for each action and every round. Our framework is considerably more complicated by the changing costs for each feedback type on each round. A similar motivation for the need of changing feedback in reinforcement learning with human feedback is given in MacGlashan et al. (2017). The goal of that work is to operationalize feedback schemes such as diminishing returns, differential feedback, or policy shaping. Human reinforcement learning with corrective feedback that can decrease or increase the action magnitude has been introduced in Celemin et al. (2019). However, none of these works are concerned with the costs that are incurred when eliciting rewards from humans, nor do they consider multiple feedback modes. Our work is connected to active learning, for example, to approaches that use reinforcement learning to learn a policy for a dynamic active learning strategy (Fang et al., 2017), or to learn a curriculum to order noisy examples (Kumar et al., 2019), or to the approach of Liu et al. (2018) who use imitation learning to select batches of data to be labeled. However, the action space these approaches consider is restricted to the decision whether or not to select particular data and is designed for a fixed budget, neither do they incorporate feedback cost in their frameworks. As we will show, our self-regulation strategy outperforms active learning based on uncertainty sampling (Settles and Craven, 2008; Peris and Casacuberta, 2018) and our reinforcement learner is rewarded in such a way that it will produce the best system as early as possible. Research that addresses the choice and the combination of different types of feedback is situated in the area between reinforcement and imitation learning (Ranzato et al., 2016; Cheng et al., 2018). Instead of learning how to mix different supervision signals, these approaches assume fixed schedules. 
Further connections between our work on learning with multiple feedback types can be drawn to various extensions of reinforcement learning by multiple tasks (Jaderberg et al., 2017), multiple loss functions (Wun et al., 2018), or multiple policies (Smith et al., 2018). Feedback in the form of corrections (Turchi et al., 2017), error markings (Domingo et al., 2017), or translation quality judgments (Lam et al., 2018) has been successfully integrated in simulation experiments into interactive-predictive machine translation. Again, these works do not consider automatic learning of a policy for the optimal choice of feedback.

3 Self-Regulated Interactive Learning

In this work, we focus on the aspect of self-regulated learning that concerns the ability to decide which type of feedback to query from a teacher (or oneself) for most efficient learning depending on the context. In our human-in-the-loop machine learning formulation, we focus on two contextual aspects that can be measured precisely: quality and cost. The self-regulation task is to optimally balance human effort and output quality. We model self-regulation as an active reinforcement learning problem with dynamic costs, where in each state, i.e., upon receiving an input, the regulator has to choose an action, here a feedback type, and pay a cost. The learner receives feedback of that type from the human to improve its prediction. Based on the effect of this learning update, the regulator's actions are reinforced or penalized, so that it improves its choice for future inputs. In the following, we first compare training objectives for a Seq2Seq learner from various types of feedback (§3.1), then introduce the self-regulator module (§3.2), and finally combine both in the self-regulation algorithm (§3.3).

3.1 Seq2Seq Learning

Let x = x_1 ... x_S be a sequence of indices over a source vocabulary V_SRC, and y = y_1 ... y_T a sequence of indices over a target vocabulary V_TRG. The goal of sequence-to-sequence learning is to learn a function for mapping an input sequence x into an output sequence y. Specifically, for the example of machine translation, where y is a translation of x, the model, parametrized by a set of weights θ, learns to maximize p_θ(y | x). This quantity is further factorized into conditional probabilities over single tokens:

p_θ(y | x) = Π_{t=1}^{T} p_θ(y_t | x; y_{<t}).

The distribution p_θ(y_t | x; y_{<t}) is defined by the neural model's softmax-normalized output vector:

p_θ(y_t | x; y_{<t}) = softmax(NN_θ(x; y_{<t})).

There are various options for building the architecture of the neural model NN_θ, such as recurrent (Sutskever et al., 2014), convolutional (Gehring et al., 2017) or attentional (Vaswani et al., 2017) encoder-decoder architectures (or a mix thereof (Chen et al., 2018)). Regardless of their architecture, there are multiple ways of interactive learning that can be applied to neural Seq2Seq learners.

Learning from Corrections (FULL). Under full supervision, i.e., when the learner receives a fully corrected output y* for an input x, cross-entropy minimization (equivalent to maximizing the likelihood of the data D under the current model) considers the following objective:

J^FULL(θ) = 1/|D| Σ_{(x, y*)∈D} −log p_θ(y* | x).

The stochastic gradient of this objective is

g^FULL_θ(x, y*) = −∇_θ log p_θ(y* | x),

constituting an unbiased estimate of the gradient

∇_θ J^FULL = E_{(x, y*)∼D} [ g^FULL_θ(x, y*) ].

A local minimum can be found by performing stochastic gradient descent on g^FULL_θ(x, y*).
This training objective is the standard in supervised learning when training with human-generated references or for online adaptation to post-edits (Turchi et al., 2017).

Learning from Error Markings (WEAK). Petrushkov et al. (2018) presented chunk-based binary feedback as a low-cost alternative to full corrections. In this scenario the human teacher marks the correct parts of the machine-generated output ŷ. As a consequence, every token in the output receives a reward δ_t, either δ_t = 1 if marked as correct, or δ_t = 0 otherwise. The objective of the learner is to maximize the likelihood of the correct parts of the output, or equivalently, to minimize

J^WEAK(θ) = 1/|D| Σ_{(x,ŷ)∈D} Σ_{t=1}^{T} −δ_t log p_θ(ŷ_t | x; ŷ_{<t}),

where the stochastic gradient is

g^WEAK_θ(x, ŷ) = −Σ_{t=1}^{T} δ_t · ∇_θ log p_θ(ŷ_t | x; ŷ_{<t}),
∇_θ J^WEAK = E_{(x,ŷ)∼D} [ g^WEAK_θ(x, ŷ) ].

The tokens ŷ_t that receive δ_t = 1 are part of the correct output y*, so the model receives a hint of what a corrected output should look like. Although the likelihood of the incorrect parts of the sequence does not weigh into the sum, they are contained in the context of the correct parts (in ŷ_{<t}).

Self-Supervision (SELF). Instead of querying the teacher for feedback, the learner can also choose to learn from its own output, that is, to learn from self-supervision. The simplest option is to treat the learner's output as if it were correct, but that quickly leads to overconfidence and degeneration. Clark et al. (2018) proposed a cross-view training method: the learner's original prediction is used as a target for a weaker model that shares parameters with the original model. We adopt this strategy by first producing a target sequence ŷ with beam search and then weakening the decoder through attention dropout with probability p_att. The objective is to minimize the negative likelihood of the original target under the weakened model:

J^SELF(θ) = 1/|D| Σ_{(x,ŷ)∈D} −log p^{p_att}_θ(ŷ | x),

where the stochastic gradient is

g^SELF_θ(x, ŷ) = −∇_θ log p^{p_att}_θ(ŷ | x),
∇_θ J^SELF = E_{(x,ŷ)∼D} [ g^SELF_θ(x, ŷ) ].

Combination. For self-regulated learning, we also consider a fourth option (NONE): the option to ignore the current input. Figure 2 summarizes the stochastic gradients for all cases:

g^s_θ(x, y) = −Σ_{t=1}^{T} f_t · ∇_θ log p^{drop}_θ(y_t | x; y_{<t}),

with y = y* if s = FULL and y = ŷ otherwise; drop = p_att if s = SELF and 0 otherwise; and f_t = 1 if s ∈ {FULL, SELF}, f_t = δ_t if s = WEAK, and f_t = 0 if s = NONE.

Figure 2: Stochastic gradients for the Seq2Seq learner in dependence of feedback type s.

In practice, Seq2Seq learning shows greater stability for mini-batch updates than online updates on single training samples. Mini-batch self-regulated learning can be achieved by accumulating stochastic gradients for a mini-batch of size B before updating θ with an average of these stochastic gradients, which we denote as

g^{s_[1:B]}_θ(x_[1:B], y_[1:B]) = 1/B Σ_{i=1}^{B} g^{s_i}_θ(x_i, y_i).

3.2 Learning to Self-Regulate

The regulator is another neural model q_φ that is optimized for the quality-cost trade-off of the Seq2Seq learner. Given an input x_i and the Seq2Seq's hypothesis ŷ_i, it chooses an action, here a supervision mode s_i ∼ q_φ(s | x_i, ŷ_i). This choice of feedback determines the update of the Seq2Seq learner (Figure 2). The regulator is rewarded by the ratio between the quality improvement ∆(θ_i, θ_{i−1}) caused by updating the Seq2Seq learner with the feedback s_i and the cost c_i of obtaining that feedback:

r(s_i, x_i, θ_i) = ∆(θ_i, θ_{i−1}) / (c_i + α).    (1)
∆(θ_i, θ_{i−1}) is measured as the difference in validation score achieved before and after the learner's update (Fang et al., 2017), and c_i as the cost of user edits. Adding a small constant cost α to the actual feedback cost ensures numerical stability. This meta-parameter can be interpreted as representing a basic cost for model updates of any kind. The objective for the regulator is to maximize the expected reward defined in Eq. 1:

J^META(φ) = E_{x∼p(x), s∼q_φ(s|x,ŷ)} [ r(s, x, θ) ].

The full gradient of this objective is estimated by the stochastic gradient for sampled actions (Williams, 1992):

g^META_φ(x, ŷ, s) = r · ∇_φ log q_φ(s | x, ŷ).    (2)

Note that the reward contains the immediate improvement after one update of the Seq2Seq learner and not the overall performance in hindsight. This is an important distinction to classic expected reward objectives in reinforcement learning, since it biases the regulator towards actions that have an immediate effect, which is desirable in the case of interaction with a human. However, since Seq2Seq learning requires updates and evaluations based on mini-batches, the regulator update also needs to be based on mini-batches of predictions, leading to the following specification of Eq. (2) for a mini-batch j:

g^META_φ(x_[1:B], ŷ_[1:B], s_[1:B]) = 1/B Σ_{i=1}^{B} g^META_φ(x_i, ŷ_i, s_i) = ∆(θ_j, θ_{j−1}) · 1/B Σ_{i=1}^{B} ∇_φ log q_φ(s_i | x_i, ŷ_i) / (c_i + α).    (3)

While mini-batch updates are required for stable Seq2Seq learning, they hinder the regulator from assigning credit for model improvement to individual elements within the mini-batch.

3.3 Algorithm

Algorithm 1 presents the proposed online learning algorithm with model updates cumulated over mini-batches. On arrival of a new input, the regulator predicts a feedback type in line 6. According to this prediction, the environment/user is asked for feedback for the Seq2Seq's prediction at cost c_i (line 7). The Seq2Seq model is updated on the basis of the feedback and a mini-batch of stochastic gradients computed as summarized in Figure 2. In order to reinforce the regulator, the Seq2Seq model's improvement (line 9) is assessed, and the parameters of the regulator are updated (line 10, Eq. 3). Training ends when the data stream or the provision of feedback ends. The intermediate Seq2Seq evaluations can be re-used for model selection (early stopping). In practice, these evaluations can either be performed by validation on a held-out set (as in the simulation experiments below) or by human assessment. A minimal code sketch of this loop is given at the end of this section.

Algorithm 1 Self-Regulated Interactive Seq2Seq
Input: Initial Seq2Seq θ_0, regulator φ_0, B
1: j ← 0
2: while inputs and human available do
3:   j ← j + 1
4:   for i ← 1 to B do
5:     Observe input x_i, Seq2Seq output ŷ_i
6:     Choose feedback: s_i ∼ q_φ(s | x_i, ŷ_i)
7:     Obtain feedback f_i of type s_i at cost c_i
8:   Update θ with g^{s_[1:B]}_θ(x_[1:B], ŷ_[1:B])
9:   Measure improvement ∆(θ_j, θ_{j−1})
10:  Update φ with g^META_φ(x_[1:B], ŷ_[1:B], s_[1:B])

Practical Considerations. The algorithm does not introduce any additional hyperparameters beyond standard learning rates, architecture design and mini-batch sizes that have to be tuned. As proposed in Petrushkov et al. (2018) or Clark et al. (2018), targets ŷ are pre-generated offline with the initial θ_0, which we found crucial for the stability of the learning process. The evaluation step after the Seq2Seq update is an overhead that comes with meta-learning, incurring costs depending on the decoding algorithm and the evaluation strategy.
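The following is a minimal PyTorch-style sketch of the two update directions used in this loop: the token-weighted Seq2Seq loss of Figure 2 and the REINFORCE-style regulator objective of Eq. 3. It is not the authors' JoeyNMT code; the function names, the constant ALPHA, and the dummy tensors in the toy check are illustrative assumptions.

```python
import torch

ALPHA = 1.0  # constant alpha added to the feedback cost in Eq. 1 (set to 1 in the paper)

def seq2seq_loss(token_log_probs, feedback_type, markings=None):
    """Token-weighted negative log-likelihood following Figure 2:
    f_t = 1 for FULL/SELF, f_t = delta_t (the human markings) for WEAK, f_t = 0 for NONE."""
    if feedback_type == "none":
        return token_log_probs.new_zeros(())            # skip: no gradient signal
    if feedback_type == "weak":
        weights = markings.to(token_log_probs.dtype)    # delta_t in {0, 1}, one per token
    else:  # "full" scores the corrected reference, "self" the learner's own beam output
        weights = torch.ones_like(token_log_probs)
    return -(weights * token_log_probs).sum()

def regulator_loss(action_log_probs, costs, improvement):
    """REINFORCE surrogate for Eq. 3: when action_log_probs are log q(s_i | x_i, y_hat_i)
    produced by the regulator, the gradient of this loss w.r.t. its parameters is
    -Delta(theta_j, theta_{j-1}) * 1/B * sum_i grad log q(s_i | x_i, y_hat_i) / (c_i + ALPHA),
    so minimizing it performs gradient ascent on the cost-weighted reward."""
    weights = improvement / (costs + ALPHA)
    return -(weights * action_log_probs).mean()

# Toy check with dummy numbers: 5 target tokens, then a mini-batch of 3 regulator actions
# whose costs mirror the FULL/WEAK/SELF examples of Table 1 (59 / 9 / 0 edits and clicks).
lp = torch.log(torch.full((5,), 0.5))
print(seq2seq_loss(lp, "weak", markings=torch.tensor([1, 0, 1, 1, 0])))
print(regulator_loss(action_log_probs=torch.log(torch.tensor([0.3, 0.5, 0.2])),
                     costs=torch.tensor([59.0, 9.0, 0.0]),
                     improvement=0.1))
```

In an actual training loop, `token_log_probs` would come from the Seq2Seq decoder for the chosen target (reference, marked hypothesis, or beam output) and `improvement` from the validation-score difference measured in line 9 of Algorithm 1.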
However, Seq2Seq updates can be performed in mini-batches, and the improvement is assessed after a mini-batch of updates, as discussed above. 4 Experiments The main research questions to be answered in our experiments are: 1. Which strategies does the regulator develop? 2. How well does a trained regulator transfer across domains? 3. How do these strategies compare against (active) learning from a single feedback type? We perform experiments for interactive NMT, where a general-domain NMT model is adapted to a specific domain by learning from the feedback of a human translator. This is a realistic interactive learning scenario where cost-free pre-training on a general domain data is possible, but each feedback generated by the human translator in the personalization step incurs a specific cost. In our experiment, we use human-generated reference translations to simulate both the cost of human feedback and to measure the performance gain achieved by model updates. 4.1 Experimental Setup Seq2Seq Architecture. Both the Seq2Seq learner and the regulator are based on LSTMs (Hochreiter and Schmidhuber, 1997). The Seq2Seq has four bi-directional encoder and four decoder layers with 1024 units each, embedding layers of size 512. It uses Luong et al. (2015)’s input feeding and output layer, and global attention with a single feed forward layer (Bahdanau et al., 2015). Regulator Architecture. The regulator consists of LSTMs on two levels: Inspired by Siamese Networks (Bromley et al., 1994), a bi-directional LSTM encoder of size 512 separately reads in both the current input sequence and the beam search hypothesis generated by the Seq2Seq. The last state of encoded source and hypothesis sequence and the previous output distribution are concatenated to form the input to a higher-level regulator LSTM of size 256. This LSTM updates its internal state and predicts a score for every feedback type for every input in the mini-batch. The feedback for each input is chosen by sampling from the distribution obtained by softmax normalization of these scores. The embeddings of the regulator are initialized by the Seq2Seq’s source embeddings and further tuned during training. The model is implemented in the JoeyNMT1 framework based on PyTorch.2 Data. We use three parallel corpora for Germanto-English translation: a general-domain data set from the WMT2017 translation shared task for Seq2Seq pre-training, TED talks from the IWSLT2017 evaluation campaign for training the regulator with simulated feedback, and the Books 1https://github.com/joeynmt/joeynmt 2Code: https://github.com/juliakreutzer/ joeynmt/tree/acl19 308 corpus from the OPUS collection (Tiedemann, 2012) for testing the regulator on another domain. Data pre-processing details and splits are given in §A.1. The joint vocabulary for Seq2Seq and the regulator consists of 32k BPE sub-words (Sennrich et al., 2016) trained on WMT. Training. The Seq2Seq model is first trained on WMT with Adam (Kingma and Ba, 2015) on mini-batches of size 64, an initial learning rate 1 × 10−4 that is halved when the loss does not decrease for three validation rounds. Training ends when the validation score does not increase any further (scoring 29.08 BLEU on the WMT test). This model is then adapted to IWSLT with selfregulated training for one epoch, with online human feedback simulated from reference translations. The mini-batch size is reduced to 32 for self-regulated training to reduce the credit assignment problem for the regulator. The constant cost α (Eq. 
1) is set to 1.3 When multiple runs are reported, the same set of random seeds is used for all models to control the order of the input data. The best run is evaluated on the Books domain for testing the generalization of the regulation strategies. Simulation of Cost and Performance. In our experiments, human feedback and its cost, and the performance gain achieved by model updates, is simulated by using human reference translations. Inspired by the keystroke mouse-action ratio (KSMR) (Barrachina et al., 2009), a common metric for measuring human effort in interactive machine translation, we define feedback cost as the sum of costs incurred by character edits and clicks, similar to Peris and Casacuberta (2018). The cost of a full correction (FULL) is the number of character edits between model output and reference, simulating the cost of a human typing.4 Error markings (WEAK) are simulated by comparing the hypothesis to the reference and marking the longest common sub-strings as correct, as proposed by Petrushkov et al. (2018). As an extension to Petrushkov et al. (2018) we mark multiple common sub-strings as correct if all of them have the longest length. The cost is defined as the number of marked words, assuming an interface that allows markings by clicking on words. For selftraining (SELF) and skipping training instances we naively assume zero cost, thus limiting the mea3Values ̸= 1 distort the rewards for self-training too much. 4As computed by the Python library difflib. surement of cost to the effort of the human teacher, and neglecting the effort on the learner’s side. Table 1 illustrates the costs per feedback type on a randomly selected set of examples. We measure the model improvement by evaluating the held-out set translation quality of the learned model at various time steps with corpus BLEU (cased SacreBLEU (Post, 2018)) and measure the accumulated costs. The best model is considered the one that delivers the highest quality at the lowest cost. This trade-off is important to bear in mind since it differs from the standard evaluation of machine translation models, where the overall best-scoring model, regardless of the supervision cost, is considered best. Finally, we evaluate the strategy learned by the regulator on an unseen domain, where the regulator decides which type of feedback the learner gets, but is not updated itself. 4.2 Results We compare learning from one type of feedback in isolation against regulators with the following set of actions: 1. Reg2: FULL, WEAK 2. Reg3: FULL, WEAK, SELF 3. Reg4: FULL, WEAK, SELF, NONE Cost vs. Quality. Figure 3 compares the improvement in corpus BLEU (Papineni et al., 2002) (corresponding to results in Translation Error Rate (TER, computed by pyTER) (Snover et al., 2006)) of regulation variants and full feedback over cumulative costs of up to 80k character edits. Using only full feedback (blue) as in standard supervised learning or learning from post-edits, the overall highest improvement can be reached (visible only after the cutoff of 80k edits; see Appendix A.2 for the comparison over a wider window of time). However, it comes at a very high cost (417k characters in total to reach +0.6 BLEU). The regulated variants offer a much cheaper improvement, at least until a cumulative cost between 80k (Reg4) and 120k (Reg2), depending on the feedback options available. The regulators do not reach the quality of the full model since their choice of feedback is oriented towards costs and immediate improvements. 
By finding a trade-off between feedback types for immediate improvements, the regulators sacrifice long-term improvement. Comparing regulators, Reg2 (orange) reaches the overall 309 SELF 0 x Sie greift in ihre Geldb¨orse und gibt ihm einen Zwanziger . ˆy It attacks their wallets and gives him a twist . y∗ She reaches into her purse and hands him a 20 . WEAK 9 x Und als ihr Vater sie sah und sah , wer sie geworden ist , in ihrem vollen M¨adchen-Sein , schlang er seine Arme um sie und brach in Tr¨anen aus . ˆy And when her father saw them and saw who became them , in their full girl ’s , he swallowed his arms around them and broke out in tears . y∗ When her father saw her and saw who she had become , in her full girl self , he threw his arms around her and broke down crying . FULL 59 x Und durch diese zwei Eigenschaften war es mir m¨oglich , die Bilder zu erschaffen , die Sie jetzt sehen . ˆy And through these two features , I was able to create the images you now see . y∗ And it was with those two properties that I was able to create the images that you ’re seeing right now . Table 1: Examples from the IWSLT17 training set, cost (2nd column) and feedback decisions made by Reg3. For weak feedback, marked parts are underlined, for full feedback, the corrections are marked by underlining the parts of the reference that got inserted and the parts of the hypothesis that got deleted. 0 10000 20000 30000 40000 50000 60000 70000 80000 Cumulative Cost 28.3 28.4 28.5 28.6 28.7 28.8 28.9 29.0 BLEU type full full/weak full/weak/self full/weak/self/none Figure 3: BLEU of regulation variants over cumulative costs. BLEU is computed on the tokenized IWSLT validation set with greedy decoding. highest improvement over the baseline model, but until the cumulative cost of around 35k character edits, Reg3 (green) offers faster improvement at a lower cost since it has an additional, cheaper feedback option. Adding the option to skip examples (Reg4, red) does not give a benefit. Appendix A.3 lists detailed results for offline evaluation on the trained Seq2Seq models on the IWSLT test set: Self-regulating models achieve improvements of 0.4-0.5 BLEU with costs reduced up to a factor of 23 in comparison to the full feedback model. The reduction in cost is enabled by the use of cheaper feedback, here markings and selfsupervision, which in isolation are very successful as well. Self-supervision works surprisingly well and can be recommended for cheap but effective unsupervised domain adaptation for sequence-tosequence learning. Self-Regulation Strategies. Figure 4 shows which actions Reg3 chooses over time when trained on IWSLT. Most often it chooses to do self-training on the current input. The choice of feedback within one batch varies only slightly dur200 300 400 500 600 700 800 Iterations +1.001e6 28.3 28.4 28.5 28.6 BLEU full/weak/self 200 300 400 500 600 700 800 +1.001e6 0 20 40 60 80 100 % of feedback self weak full Figure 4: Reg3 actions as chosen over time, depicted for each iteration. Counting of iterations starts at the previous iteration count of the baseline model. ing training, with the exception of an initial exploration phase within the first 100 iterations. In general, we observe that all regulators are highly sensitive to balancing cost and performance, and mostly prefer the cheapest option (e.g., Reg4 by choosing mostly NONE) since they are penalized heavily for choosing (or exploring) expensive options (see Eq. 1). 
A further research question is whether and how the self-regulation module takes the input or output context into account. We therefore compare its decisions to a context-free ϵ-greedy strategy. The ϵ-greedy algorithm is a successful algorithm for multi-armed bandits (Watkins, 1989). In our case, the arms are the four feedback types. They are chosen based on their reward statistics, here the average empirical reward per feedback type Qi(s) = 1 Ni(s) P 0,...,i r(si). With probability 1 −ϵ, the algorithm selects the feedback type with the highest empirical reward (exploitation), otherwise picks one of the remaining arms at random (exploration). In contrast to the neural regulator model, ϵ-greedy decides solely on the basis 310 0 5000 10000 15000 20000 25000 30000 35000 40000 Cumulative Cost 28.3 28.4 28.5 28.6 28.7 28.8 BLEU type full/weak/self eps0.1 eps0.25 eps0.5 eps0.75 eps0.9 Figure 5: BLEU and cumulative costs on IWSLT for Reg3 and ϵ-greedy with ϵ ∈[0.1, 0.25, 0.5, 0.75, 0.9]. of the reward statistics and has no internal contextual state representation. The comparison of Reg3 with ϵ-greedy for a range of values for ϵ in Figure 5 shows that learned regulator behaves indeed very similar to an ϵ-greedy strategy with ϵ = 0.25. ϵ-greedy variants with higher amounts of exploration show a slower increase in BLEU, while those with more exploitation show an initial steep increase that flattens out, leading to overall lower BLEU scores. The regulator has hence found the best trade-off, which is an advantage over the ϵ-greedy algorithm where the ϵ hyperparameter requires dedicated tuning. Considering the ϵ-greedy-like strategy of the regulator and the strong role of the cost factor shown in Figure 4, the regulator module does not appear to choose individual actions based e.g., on the difficulty of inputs, but rather composes mini-batches with a feedback ratio according to the feedback type’s statistics. This confirms the observations of Peris and Casacuberta (2018), who find that the subset of instances selected for labeling is secondary— it is rather the mixing ratio of feedback types that matters. This finding is also consistent with the mini-batch update regime that forces the regulator to take a higher-level perspective and optimize the expected improvement at the granularity of (minibatch) updates rather than at the input level. Domain Transfer. After training on IWSLT, we evaluate the regulators on the Books domain: Can they choose the best actions for an efficient learning progress without receiving feedback on the new domain? We evaluate the best run of each regulator type (i.e., φ trained on IWSLT), with the Seq2Seq model reset to the WMT baseline. 0.0 0.2 0.4 0.6 0.8 1.0 Cumulative Cost 1e8 14.0 14.2 14.4 14.6 14.8 BLEU type full weak full/weak full/weak/self full/weak/self/none Figure 6: Domain transfer of regulators trained on IWSLT to the Books domain in comparison to full and weak feedback only. The regulator is not further adapted to the Books domain, but decides on the feedback types for training the Seq2Seq model for a single epoch on the Books data. Figure 6 visualizes the regulated training process of the Seq2Seq model. As before, Reg3 performs best, outperforming weak, full and self-supervision (reaching 14.75 BLEU, not depicted since zero cost). 
Learning from full feedback improves much later in training and reaches 14.53 BLEU.5 One explanation is that the reference translations in the Books corpus are less literal than the ones for IWSLT, such that a weak feedback signal allows the learner to learn more efficiently than from full corrections. Appendix A.4 reports the results for offline evaluation on the trained Seq2Seq models on the Books test set. Comparison to Active Learning. A classic active learning strategy is to sample a subset of the input data for full labeling based on the uncertainty of the model predictions (Settles and Craven, 2008). The size of this subset, i.e. the amount of human labeling effort, has to be known and determined before learning. Figure 7 compares the self-regulators on the Books domain with models that learn from a fixed ratio of fullylabeled instances in every batch. These are chosen according to the model’s uncertainty, here measured by the average token entropy of the model’s best-scoring beam search hypothesis. The regulated models with a mix of feedback types clearly outperform the active learning strategies, 5With multiple epochs it would improve further, but we avoid showing the human the same inputs multiple times. 311 0.0 0.2 0.4 0.6 0.8 1.0 Cumulative Cost 1e8 14.0 14.2 14.4 14.6 14.8 BLEU type full 90% 70% 50% 30% 10% full/weak full/weak/self full/weak/self/none Figure 7: Learned self-regulation strategies in comparison to uncertainty-based active learning with a fixed percentage of full feedback on the Books domain. both in terms of cost-efficient learning (Figure 7) as well as in overall quality (See Figure 9 in Appendix A.5). We conclude that mixing feedback types, especially in the case where full feedback is less reliable, offers large improvements over standard stream-based active learning strategies. 4.3 Prospects for Field Studies Our experiments were designed as a pilot study to test the possibilities of self-regulated learning in simulation. In order to advance to field studies where human users interact with Seq2Seq models, several design choices have to be adapted with caution. Firstly, we simulate both feedback cost and quality improvement by measuring distances to static reference outputs. The experimental design in a field study has to account for a variation of feedback strength, feedback cost, and performance assessments, across time, across sentences, and across human users (Settles et al., 2008). One desideratum for field studies is thus to analyze this variation by analyzing the experimental results in a mixed effects model that accounts for variability across sentences, users, and annotation sessions (Baayen et al., 2008; Karimova et al., 2018). Secondly, our simulation of costs considers only the effort of the human teacher, not the machine learner. The strong preference for the cheapest feedback option might be a result of overestimating the cost of human post-editing and underestimating the cost of self-training. Thus, a model for field studies where data is limited might greatly benefit from learned estimates of feedback cost and quality improvement (Kreutzer et al., 2018). 5 Conclusion We proposed a cost-aware algorithm for interactive sequence-to-sequence learning, with a selfregulation module at its core that learns which type of feedback to query from a human teacher. 
The empirical study on interactive NMT with simulated human feedback showed that this selfregulated model finds more cost-efficient solutions than models learning from a single feedback type and uncertainty-based active learning models, also under domain shift. While this setup abstracts away from certain confounding variables to be expected in real-life interactive machine learning, it should be seen as a pilot experiment that allows focussing on our central research questions under an exact and noise-free computation of feedback cost and performance gain. The proposed framework can naturally be expanded to integrate more feedback modes suitable for the interaction with humans, e.g., pairwise comparisons or output rankings. Future research directions will involve the development of reinforcement learning model with multi-dimensional rewards, and modeling explicit credit assignment for improving the capabilities of the regulator to make context-sensitive decisions in mini-batch learning. Acknowledgements We would like to thank the anonymous reviewers for their valuable feedback. The research reported in this paper was supported in part by the German research foundation (DFG) under grant RI2221/4-1. References R Harald Baayen, Douglas J Davidson, and Douglas M Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of memory and language, 59(4):390–412. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR), San Diego, California, USA. Sergio Barrachina, Oliver Bender, Francisco Casacuberta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes´us Tom´as, Enrique Vidal, and Juan-Miguel Vilar. 2009. Statistical approaches to computer-assisted translation. Computational Linguistics, 35(1). 312 Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. 1994. Signature verification using a ”siamese” time delay neural network. In Advances in Neural Information Processing Systems (NeurIPS), Denver, CO, USA. Carlos Celemin, Javier Ruiz-del Solar, and Jens Kober. 2019. A fast hybrid reinforcement learning framework with human corrective feedback. Autonomous Robots, 43(5):1173–1186. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia. Ching-An Cheng, Xinyan Yan, Nolan Wagener, and Byron Boots. 2018. Fast policy learning through imitation and reinforcement. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), Monterey, CA, USA. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium. Miguel Domingo, ´Alvaro Peris, and Francisco Casacuberta. 2017. Segment-based interactivepredictive machine translation. Machine Translation, 31(4):163–185. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), Vancouver, Canada. John Hattie and Gregory M. Donoghue. 2016. Learning strategies: a synthesis and conceptual model. NPJ Science of Learning, 1:16013–16013. John Hattie and Helen Timperley. 2007. The power of feedback. American Educational Research Association, 77(1):81–112. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. CoRR, abs/1712.05690. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. 2017. Reinforcement learning with unsupervised auxiliary tasks. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France. Sariya Karimova, Patrick Simianer, and Stefan Riezler. 2018. A user-study on online adaptation of neural machine translation to human post-edits. Machine Translation, 32(4):309–324. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA. Julia Kreutzer, Joshua Uyheng, and Stefan Riezler. 2018. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia. David Krueger, Jan Leike, Owain Evans, and John Salvatier. 2016. Active reinforcement learning: Observing rewards at a cost. In Proceeding of the 30th Conference on Neural Information Processing Systems (NeurIPS), Barcelona, Spain. Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis, MN, USA. Tsz Kin Lam, Julia Kreutzer, and Stefan Riezler. 2018. A reinforcement learning approach to interactivepredictive neural machine translation. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation (EAMT), Alicante, Spain. Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL), Brussels, Belgium. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. James MacGlashan, Mark K. Ho, Robert Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, and Michael L. Littman. 2017. Interactive learning from policy-dependent human feedback. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia. 313 T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. 
Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of the 29th Conference on Artificial Intelligence (AAAI), Austin, TX, USA. Joel T. Nigg. 2017. Annual research review: On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology. Journal of Child Psychology and Psychiatry, 58(4):361–383. Ernesto Panadero. 2017. A review of self-regulated learning: Six models and four directions of research. Frontiers in Psychology, 8(422):1–28. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL), Philadelphia, PA, USA. ´Alvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CONLL), Brussels, Belgium. Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from chunk-based feedback in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation (WMT), Brussels, Belgium. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Proceedings of the International Conference on Learning Representation (ICLR), San Juan, Puerto Rico. J¨urgen Schmidhuber, Jieyu Zhao, and Marco Wiering. 1996. Simple principles of metalaerning. Technical Report 69 96, IDSIA, Lugano, Switzerland. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Honolulu, Hawaii. Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NeurIPS Workshop on Cost-Sensitive Learning, Vancouver, Canada. Matthew J. A. Smith, Herke Van Hoof, and Joelle Pineau. 2018. An inference-based policy gradient method for learning options. In Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas (AMTA), volume 200. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems (NeurIPS), Montreal, Canada. Sebastian Thrun and Lorien Pratt, editors. 1998. Learning to Learn. Kluwer, Dortrecht, MA, USA. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC), Istanbul, Turkey. Marco Turchi, Matteo Negri, M Amin Farajian, and Marcello Federico. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA.
Christopher Watkins. 1989. Learning from delayed rewards. PhD thesis, Cambridge University.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256.
Lijun Wu, Fei Tian, Yingce Xia, Yang Fan, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Learning to teach with dynamic loss functions. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada.
Barry J. Zimmerman and Dale H. Schunk, editors. 1989. Self-Regulated Learning and Academic Achievement. Springer, New York, NY, USA.

A Appendices

A.1 Data

Table 2: Number of sentences in the parallel corpora used for pre-training (WMT), regulator training (IWSLT), and domain transfer evaluation (Books).

de↔en    WMT        IWSLT    Books
Train    5,889,699  206,112  46,770
Dev      2,169      2,385    2,000
Test     3,004      1,138    2,000

The WMT data is obtained from the WMT 2017 shared task website (http://www.statmt.org/wmt17/translation-task.html) and pre-processed as described in Hieber et al. (2017). The same preprocessing pipeline is used for the IWSLT and Books data. The IWSLT2017 data is obtained from the evaluation campaign website (https://sites.google.com/site/iwsltevaluation2017/). For validation we use newstest2015 on WMT and tst2014+tst2015 on IWSLT; for testing we use newstest2017 on WMT and tst2017 on IWSLT. Since there is no standard split for the Books corpus, we randomly select 2k sentences each for validation and testing. Table 2 gives an overview of the size of the three resources.

A.2 Online Evaluation on IWSLT

Figure 8 displays the development of BLEU over costs and time.

A.3 Offline Evaluation on IWSLT

Table 3: Evaluation of models at early stopping points. Results for three random seeds on IWSLT are averaged, with the standard deviation given after the ± sign. The dev set is translated with greedy decoding (as during validation) and the test set with beam search of width five. The costs are measured in character edits and clicks, as described in Section 4.

Model     IWSLT dev BLEU↑   Cost↓   IWSLT test BLEU↑   IWSLT test TER↓
Baseline  28.28             –       24.84              62.42
Full      28.93±0.02        417k    25.60±0.02         61.86±0.03
Weak      28.65±0.01        32k     25.10±0.09         62.12±0.12
Self      28.58±0.02        –       25.33±0.06         61.96±0.05
Reg4      28.57±0.04        68k     25.23±0.05         62.02±0.12
Reg3      28.61±0.03        18k     25.23±0.09         62.07±0.06
Reg2      28.66±0.06        88k     25.27±0.09         61.91±0.06

Table 3 reports the offline held-out evaluations at the early stopping points selected on the dev set for all feedback modes. All models notably improve over the baseline. Only full feedback yields the overall best model on IWSLT (+0.6 BLEU / −0.6 TER), but it costs a massive amount of edits (417k characters). Self-regulating models still achieve improvements of 0.4–0.5 BLEU/TER with costs reduced by up to a factor of 23. The reduction in cost is enabled by the use of cheaper feedback, here markings and self-supervision, which are successful in isolation as well. Self-supervision works surprisingly well, which makes it attractive for cheap but effective unsupervised domain adaptation. It has to be noted that both weak and self-supervision worked well only when targets were pre-computed with the baseline model and held fixed during training. We suspect that the strong reward signal (ft = 1) for non-reference outputs otherwise leads to undesired local overfitting effects that a learner with online-generated targets cannot recover from.
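To make the last observation concrete, the following is a minimal sketch, not the authors' implementation, of how self-supervision targets could be pre-computed once with the frozen baseline system and then held fixed during training; `baseline_model.greedy_decode`, `source_sentences`, and the surrounding training loop are hypothetical placeholders.

```python
# Minimal sketch (assumed interfaces, not the paper's code): pre-compute
# self-supervision targets with the frozen baseline model and never refresh them.

def precompute_self_targets(baseline_model, source_sentences):
    """Decode every source sentence once with the baseline and store the outputs."""
    fixed_targets = {}
    for idx, src in enumerate(source_sentences):
        # Greedy decoding with the *baseline* parameters; keeping these outputs
        # fixed avoids the local overfitting effect described above.
        fixed_targets[idx] = baseline_model.greedy_decode(src)
    return fixed_targets


def self_supervised_pairs(batch_indices, source_sentences, fixed_targets):
    """Build (source, target) training pairs from the pre-computed, fixed targets."""
    return [(source_sentences[i], fixed_targets[i]) for i in batch_indices]
```

The point of the sketch is only that the targets come from the frozen baseline rather than from the continually updated learner.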
A.4 Domain Transfer

Table 4: Evaluation of models at early stopping points on the Books test set (beam search with width five).

Model     BLEU↑   TER↓    Cost↓
Baseline  14.19   79.81   –
Full      14.87   79.12   1B
Weak      14.74   78.14   93M
Self      14.73   78.86   –
Reg4      14.80   79.02   57M
Reg3      14.80   78.70   41M
Reg2      15.00   78.21   142M

Table 4 reports the test set evaluation on the Books domain for the best model from the IWSLT domain under each feedback mode. The baseline was trained on WMT parallel data without any regulation. The regulator was trained on IWSLT and evaluated on Books; the Seq2Seq model is further trained for one epoch on Books. The costs are measured in character edits and clicks. The best result in terms of BLEU and TER is achieved by the Reg2 model, even outperforming the model with full feedback. As observed for the IWSLT domain (cf. Section 4.2), self-training is very effective, but it is outperformed by the Reg2 model and roughly on par with the Reg3 model.

Figure 8: Regulation variants evaluated in terms of BLEU over cumulative costs (a) and over time (b). Iteration counts start from the iteration count of the baseline model. One iteration on IWSLT equals training on one mini-batch of 32 instances. The BLEU score is computed on the tokenized validation set with greedy decoding. In (b) the lines correspond to the means over three runs, and the shaded area depicts the estimated 95% confidence interval.

A.5 Active Learning on Books

Figure 9: Development of validation BLEU over time for learned regulation strategies in comparison to active learning with a fixed percentage γ of full feedback. Counting of iterations starts at the previous iteration count of the baseline model.

Figure 9 shows the development of BLEU over time for the regulators and for active learning strategies with a fixed ratio of full feedback per batch (γ ∈ {10%, 30%, 50%, 70%, 90%}). The decision whether to label an instance in a batch is based on the average token entropy of the model's current hypothesis. With this uncertainty-based sampling strategy, using only 50% of the fully supervised labels achieves the same quality as using 100% of them. However, the regulated models not only reach this quality at a lower cost (see Figure 7), but also reach an overall higher quality.
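As a concrete illustration of this selection rule, here is a minimal sketch, under assumed interfaces rather than the actual implementation, of entropy-based selection of the γ-fraction of a batch that receives full feedback; the per-token output distributions of the current hypotheses are assumed to be available as plain probability lists.

```python
import math


def average_token_entropy(token_distributions):
    """Mean entropy (in nats) of the token-level output distributions of one hypothesis."""
    entropies = [
        -sum(p * math.log(p) for p in dist if p > 0.0)
        for dist in token_distributions  # one probability list per output token
    ]
    return sum(entropies) / len(entropies) if entropies else 0.0


def select_for_full_feedback(batch_token_dists, gamma):
    """Indices of the gamma-fraction most uncertain hypotheses in a batch.

    gamma is a fraction, e.g. 0.5 for the 50% setting discussed above.
    """
    scored = sorted(
        ((average_token_entropy(dists), i) for i, dists in enumerate(batch_token_dists)),
        reverse=True,  # highest average entropy (= most uncertain) first
    )
    k = max(1, round(gamma * len(batch_token_dists)))
    return [i for _, i in scored[:k]]
```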
A.6 Regulation Strategies on IWSLT

Figure 10: Feedback chosen by Reg2 on IWSLT (validation BLEU over iterations, and the percentage of full and weak feedback selected).

Figure 11: Feedback chosen by Reg4 on IWSLT (validation BLEU over iterations, and the percentage of full, weak, self, and none feedback selected).

Figures 10 and 11 show the ratio of feedback types for self-regulation during training with Reg2 and Reg4, respectively.
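For completeness, here is a small sketch of how the feedback ratios plotted in Figures 10 and 11 could be tallied from a log of the regulator's per-iteration choices; the logging format (one feedback label per iteration) is an assumption, not the paper's actual tooling.

```python
from collections import Counter


def feedback_ratios(choices, window=100):
    """Percentage of each feedback type chosen per window of training iterations.

    `choices` is a list of labels such as "full", "weak", "self", "none",
    one per iteration (a hypothetical logging format).
    """
    ratios = []
    for start in range(0, len(choices), window):
        counts = Counter(choices[start:start + window])
        total = sum(counts.values())
        ratios.append({label: 100.0 * n / total for label, n in counts.items()})
    return ratios
```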